Test Report: Docker_Linux_containerd 21969

                    
ab0a8cfdd326918695f502976b3bdb249954a688:2025-11-23:42465

Test failures (4/333)

Order  Failed test                                                  Duration (s)
303    TestStartStop/group/old-k8s-version/serial/DeployApp         14.15
306    TestStartStop/group/no-preload/serial/DeployApp              11.84
328    TestStartStop/group/embed-certs/serial/DeployApp             15.88
329    TestStartStop/group/default-k8s-diff-port/serial/DeployApp   15.9
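
All four failures are DeployApp subtests with similar durations. The detailed log below shows the first of them passing pod startup and then failing its open-file-descriptor check; a minimal sketch of the assertion the test performs, using the context name from that log (the other three profiles would substitute their own contexts):

	kubectl --context old-k8s-version-204346 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-204346 exec busybox -- /bin/sh -c "ulimit -n"
	# expected 1048576; observed in this run: 1024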
TestStartStop/group/old-k8s-version/serial/DeployApp (14.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-204346 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [85a1fcd5-ee10-4749-9dec-40efed82eb3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [85a1fcd5-ee10-4749-9dec-40efed82eb3e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.002934355s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-204346 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-204346
helpers_test.go:243: (dbg) docker inspect old-k8s-version-204346:

-- stdout --
	[
	    {
	        "Id": "74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716",
	        "Created": "2025-11-23T08:43:13.914336238Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255015,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:43:13.954859222Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716/hostname",
	        "HostsPath": "/var/lib/docker/containers/74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716/hosts",
	        "LogPath": "/var/lib/docker/containers/74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716/74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716-json.log",
	        "Name": "/old-k8s-version-204346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-204346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-204346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716",
	                "LowerDir": "/var/lib/docker/overlay2/c1a2c09b9684904e47b03e9569e26d403b09f5d541f2cb59b94c6e639ed9b4e3-init/diff:/var/lib/docker/overlay2/ee04ca8b85d0dedeb02bd9a5189a59a7f53ca89a011d262a78df32fa43bf0598/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c1a2c09b9684904e47b03e9569e26d403b09f5d541f2cb59b94c6e639ed9b4e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c1a2c09b9684904e47b03e9569e26d403b09f5d541f2cb59b94c6e639ed9b4e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c1a2c09b9684904e47b03e9569e26d403b09f5d541f2cb59b94c6e639ed9b4e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-204346",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-204346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-204346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-204346",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-204346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "db03bea2ae002bb3595102e41f0b3c5dd373e7f121cbf490c03f867ac8b10fc2",
	            "SandboxKey": "/var/run/docker/netns/db03bea2ae00",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-204346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2c3268f545c0648cec3972c75676102d767b9cbd699aea51b301ba1de04cad51",
	                    "EndpointID": "a6fed4b2c7bb6c663b8e774c8e64911b07fef263695c45641973d777a7144fb2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "1a:83:9b:a0:7e:0e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-204346",
	                        "74b9ec686773"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
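
Note that "Ulimits" under HostConfig is empty ([]) in the inspect output above, so the node container carries no per-container nofile override and falls back to the Docker daemon's defaults. One way to query just that field is to narrow the same docker inspect with a format template:

	docker inspect --format '{{json .HostConfig.Ulimits}}' old-k8s-version-204346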
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-204346 -n old-k8s-version-204346
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-204346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-204346 logs -n 25: (1.058694638s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-flag-570956 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-570956 │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p NoKubernetes-846693 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ -p NoKubernetes-846693 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │                     │
	│ ssh     │ force-systemd-env-352249 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-352249  │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p force-systemd-env-352249                                                                                                                                                                                                                         │ force-systemd-env-352249  │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p cert-expiration-680868 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-680868    │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ force-systemd-flag-570956 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-570956 │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p force-systemd-flag-570956                                                                                                                                                                                                                        │ force-systemd-flag-570956 │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p cert-options-194967 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ stop    │ -p NoKubernetes-846693                                                                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p NoKubernetes-846693 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ -p NoKubernetes-846693 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │                     │
	│ delete  │ -p NoKubernetes-846693                                                                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p missing-upgrade-231159 --memory=3072 --driver=docker  --container-runtime=containerd                                                                                                                                                             │ missing-upgrade-231159    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ cert-options-194967 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ -p cert-options-194967 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ delete  │ -p cert-options-194967                                                                                                                                                                                                                              │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p stopped-upgrade-595653 --memory=3072 --vm-driver=docker  --container-runtime=containerd                                                                                                                                                          │ stopped-upgrade-595653    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p missing-upgrade-231159 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                      │ missing-upgrade-231159    │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ stopped-upgrade-595653 stop                                                                                                                                                                                                                         │ stopped-upgrade-595653    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p stopped-upgrade-595653 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                      │ stopped-upgrade-595653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p stopped-upgrade-595653                                                                                                                                                                                                                           │ stopped-upgrade-595653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p old-k8s-version-204346 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p missing-upgrade-231159                                                                                                                                                                                                                           │ missing-upgrade-231159    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-999106         │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:43:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:43:27.495640  258086 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:43:27.495743  258086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:27.495751  258086 out.go:374] Setting ErrFile to fd 2...
	I1123 08:43:27.495755  258086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:27.495953  258086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:43:27.496394  258086 out.go:368] Setting JSON to false
	I1123 08:43:27.497504  258086 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5148,"bootTime":1763882259,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:43:27.497559  258086 start.go:143] virtualization: kvm guest
	I1123 08:43:27.499449  258086 out.go:179] * [no-preload-999106] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:43:27.500767  258086 notify.go:221] Checking for updates...
	I1123 08:43:27.500781  258086 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:43:27.502005  258086 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:43:27.503191  258086 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:43:27.504274  258086 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:43:27.505281  258086 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:43:27.506287  258086 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:43:27.507765  258086 config.go:182] Loaded profile config "cert-expiration-680868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:43:27.507859  258086 config.go:182] Loaded profile config "kubernetes-upgrade-776670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:43:27.507939  258086 config.go:182] Loaded profile config "old-k8s-version-204346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:43:27.508012  258086 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:43:27.532390  258086 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:43:27.532462  258086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:43:27.588863  258086 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:43:27.578321532 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:43:27.588959  258086 docker.go:319] overlay module found
	I1123 08:43:27.590837  258086 out.go:179] * Using the docker driver based on user configuration
	I1123 08:43:27.592139  258086 start.go:309] selected driver: docker
	I1123 08:43:27.592164  258086 start.go:927] validating driver "docker" against <nil>
	I1123 08:43:27.592175  258086 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:43:27.592773  258086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:43:27.653421  258086 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:43:27.643267927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:43:27.653668  258086 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:43:27.653954  258086 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:43:27.655624  258086 out.go:179] * Using Docker driver with root privileges
	I1123 08:43:27.656995  258086 cni.go:84] Creating CNI manager for ""
	I1123 08:43:27.657071  258086 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:43:27.657084  258086 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:43:27.657159  258086 start.go:353] cluster config:
	{Name:no-preload-999106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-999106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:43:27.658480  258086 out.go:179] * Starting "no-preload-999106" primary control-plane node in "no-preload-999106" cluster
	I1123 08:43:27.659678  258086 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:43:27.660749  258086 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:43:27.661680  258086 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:43:27.661748  258086 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:43:27.661771  258086 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/config.json ...
	I1123 08:43:27.661801  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/config.json: {Name:mk1854d74e572dba5e78564093e1183622e9aa74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:27.661927  258086 cache.go:107] acquiring lock: {Name:mka7418a84f8d9aaa890eb7bcafd158f0f845949 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.661970  258086 cache.go:107] acquiring lock: {Name:mke646091201bbef396ff67d16f0cce49990b355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.661948  258086 cache.go:107] acquiring lock: {Name:mk929bb8e7363fd9f8d602565b078a816979b3d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.661979  258086 cache.go:107] acquiring lock: {Name:mk667c169463661b7e999b395cc2d348440d0d0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.662058  258086 cache.go:115] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 08:43:27.662070  258086 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:27.662087  258086 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:27.662069  258086 cache.go:107] acquiring lock: {Name:mk4a8ffda79c57b59d9ec0be62cf6989cc0b3dc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.662104  258086 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:27.662089  258086 cache.go:107] acquiring lock: {Name:mkce85e18a9851767cd13073008b6382df083ea3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.662080  258086 cache.go:107] acquiring lock: {Name:mk495076811ea27b7ee848ef73ebf58029c788de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.662200  258086 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:27.662257  258086 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:27.662073  258086 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 113.368µs
	I1123 08:43:27.662298  258086 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 08:43:27.662298  258086 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:27.662338  258086 cache.go:107] acquiring lock: {Name:mkc513b15aec17d5c3e77aa2e6131827198f8c26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.662430  258086 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:43:27.663312  258086 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:27.663446  258086 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:27.663495  258086 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:27.663529  258086 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:27.663560  258086 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:43:27.663553  258086 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:27.663602  258086 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:27.683115  258086 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:43:27.683133  258086 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:43:27.683151  258086 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:43:27.683188  258086 start.go:360] acquireMachinesLock for no-preload-999106: {Name:mk535dea2e363deaa61ac9c5041ac2d499c9efc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.683286  258086 start.go:364] duration metric: took 77.877µs to acquireMachinesLock for "no-preload-999106"
	I1123 08:43:27.683314  258086 start.go:93] Provisioning new machine with config: &{Name:no-preload-999106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-999106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:43:27.683378  258086 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:43:23.886201  254114 out.go:252]   - Booting up control plane ...
	I1123 08:43:23.886286  254114 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:43:23.886377  254114 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:43:23.886992  254114 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:43:23.903197  254114 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:43:23.904138  254114 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:43:23.904196  254114 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:43:24.010365  254114 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1123 08:43:28.512514  254114 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502224 seconds
	I1123 08:43:28.512707  254114 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:43:28.525209  254114 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:43:29.051871  254114 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:43:29.052189  254114 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-204346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:43:29.563746  254114 kubeadm.go:319] [bootstrap-token] Using token: kv40xr.vpl4w4wq1fqvcjbv
	I1123 08:43:29.565119  254114 out.go:252]   - Configuring RBAC rules ...
	I1123 08:43:29.565274  254114 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:43:29.570668  254114 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:43:29.578425  254114 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:43:29.581516  254114 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:43:29.584593  254114 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:43:29.588395  254114 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:43:29.599565  254114 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:43:29.809875  254114 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:43:29.974613  254114 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:43:29.975627  254114 kubeadm.go:319] 
	I1123 08:43:29.975755  254114 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:43:29.975777  254114 kubeadm.go:319] 
	I1123 08:43:29.975879  254114 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:43:29.975889  254114 kubeadm.go:319] 
	I1123 08:43:29.975929  254114 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:43:29.976013  254114 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:43:29.976095  254114 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:43:29.976109  254114 kubeadm.go:319] 
	I1123 08:43:29.976189  254114 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:43:29.976197  254114 kubeadm.go:319] 
	I1123 08:43:29.976265  254114 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:43:29.976274  254114 kubeadm.go:319] 
	I1123 08:43:29.976365  254114 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:43:29.976483  254114 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:43:29.976577  254114 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:43:29.976584  254114 kubeadm.go:319] 
	I1123 08:43:29.976725  254114 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:43:29.976849  254114 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:43:29.976864  254114 kubeadm.go:319] 
	I1123 08:43:29.976980  254114 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kv40xr.vpl4w4wq1fqvcjbv \
	I1123 08:43:29.977124  254114 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c48a3b11504a9c7a5d242d913eadf6a5354a8cb06c9ffcf8385d22efb04d8fa \
	I1123 08:43:29.977157  254114 kubeadm.go:319] 	--control-plane 
	I1123 08:43:29.977166  254114 kubeadm.go:319] 
	I1123 08:43:29.977310  254114 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:43:29.977319  254114 kubeadm.go:319] 
	I1123 08:43:29.977452  254114 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kv40xr.vpl4w4wq1fqvcjbv \
	I1123 08:43:29.977614  254114 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c48a3b11504a9c7a5d242d913eadf6a5354a8cb06c9ffcf8385d22efb04d8fa 
	I1123 08:43:29.980159  254114 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:43:29.980378  254114 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:43:29.980409  254114 cni.go:84] Creating CNI manager for ""
	I1123 08:43:29.980425  254114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:43:29.984213  254114 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:43:27.685925  258086 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:43:27.686123  258086 start.go:159] libmachine.API.Create for "no-preload-999106" (driver="docker")
	I1123 08:43:27.686177  258086 client.go:173] LocalClient.Create starting
	I1123 08:43:27.686233  258086 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem
	I1123 08:43:27.686260  258086 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:27.686276  258086 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:27.686316  258086 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem
	I1123 08:43:27.686334  258086 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:27.686346  258086 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:27.686738  258086 cli_runner.go:164] Run: docker network inspect no-preload-999106 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:43:27.705175  258086 cli_runner.go:211] docker network inspect no-preload-999106 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:43:27.705249  258086 network_create.go:284] running [docker network inspect no-preload-999106] to gather additional debugging logs...
	I1123 08:43:27.705267  258086 cli_runner.go:164] Run: docker network inspect no-preload-999106
	W1123 08:43:27.723756  258086 cli_runner.go:211] docker network inspect no-preload-999106 returned with exit code 1
	I1123 08:43:27.723782  258086 network_create.go:287] error running [docker network inspect no-preload-999106]: docker network inspect no-preload-999106: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-999106 not found
	I1123 08:43:27.723796  258086 network_create.go:289] output of [docker network inspect no-preload-999106]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-999106 not found
	
	** /stderr **
	I1123 08:43:27.723894  258086 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:27.742266  258086 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5d8b9fdde185 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:76:1f:2b:8a:58:68} reservation:<nil>}
	I1123 08:43:27.742817  258086 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-103255eb2e92 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:bb:33:85:24:bc} reservation:<nil>}
	I1123 08:43:27.743314  258086 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa9f597fddc6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:bb:01:5e:01:61} reservation:<nil>}
	I1123 08:43:27.743832  258086 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-da43b5ed9d8a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:fe:29:08:73:55} reservation:<nil>}
	I1123 08:43:27.744448  258086 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c01e10}
	I1123 08:43:27.744470  258086 network_create.go:124] attempt to create docker network no-preload-999106 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:43:27.744518  258086 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-999106 no-preload-999106
	I1123 08:43:27.793693  258086 network_create.go:108] docker network no-preload-999106 192.168.85.0/24 created
	I1123 08:43:27.793726  258086 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-999106" container
	I1123 08:43:27.793798  258086 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:43:27.815508  258086 cli_runner.go:164] Run: docker volume create no-preload-999106 --label name.minikube.sigs.k8s.io=no-preload-999106 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:43:27.836788  258086 oci.go:103] Successfully created a docker volume no-preload-999106
	I1123 08:43:27.836929  258086 cli_runner.go:164] Run: docker run --rm --name no-preload-999106-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-999106 --entrypoint /usr/bin/test -v no-preload-999106:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:43:27.851417  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:43:27.858908  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:43:27.860347  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:43:27.863442  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:43:27.865314  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:43:27.878248  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:43:27.889986  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1123 08:43:27.973948  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1123 08:43:27.973981  258086 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 311.645455ms
	I1123 08:43:27.973999  258086 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 08:43:28.304822  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 08:43:28.304856  258086 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 642.854298ms
	I1123 08:43:28.304870  258086 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 08:43:28.332384  258086 oci.go:107] Successfully prepared a docker volume no-preload-999106
	I1123 08:43:28.332436  258086 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1123 08:43:28.332544  258086 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:43:28.332582  258086 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:43:28.332628  258086 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:43:28.401507  258086 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-999106 --name no-preload-999106 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-999106 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-999106 --network no-preload-999106 --ip 192.168.85.2 --volume no-preload-999106:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:43:28.713710  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Running}}
	I1123 08:43:28.734068  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:43:28.754748  258086 cli_runner.go:164] Run: docker exec no-preload-999106 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:43:28.804354  258086 oci.go:144] the created container "no-preload-999106" has a running status.
	I1123 08:43:28.804388  258086 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa...
	I1123 08:43:28.861878  258086 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:43:28.899755  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:43:28.921384  258086 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:43:28.921408  258086 kic_runner.go:114] Args: [docker exec --privileged no-preload-999106 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:43:28.971140  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:43:28.992543  258086 machine.go:94] provisionDockerMachine start ...
	I1123 08:43:28.992659  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:29.017873  258086 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:29.018228  258086 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1123 08:43:29.018252  258086 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:43:29.019229  258086 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57704->127.0.0.1:33063: read: connection reset by peer
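The "Error dialing TCP ... connection reset by peer" line is expected noise: provisioning dials the container's forwarded SSH port (127.0.0.1:33063) before sshd inside the fresh kicbase container is listening, and libmachine retries until the hostname command succeeds at 08:43:32. A bare retry-dial sketch with only the standard library; the address and attempt count are illustrative, not libmachine's actual policy:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry keeps dialing until sshd in the new container starts listening.
    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
        var err error
        for i := 0; i < attempts; i++ {
            var c net.Conn
            if c, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
                return c, nil
            }
            time.Sleep(time.Second)
        }
        return nil, fmt.Errorf("after %d attempts: %w", attempts, err)
    }

    func main() {
        if c, err := dialWithRetry("127.0.0.1:33063", 10); err != nil {
            fmt.Println(err)
        } else {
            c.Close()
        }
    }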
	I1123 08:43:29.339938  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 08:43:29.339967  258086 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.677878189s
	I1123 08:43:29.339993  258086 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 08:43:29.349964  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 08:43:29.349997  258086 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.688022096s
	I1123 08:43:29.350017  258086 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 08:43:29.423577  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 08:43:29.423607  258086 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.761664135s
	I1123 08:43:29.423620  258086 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 08:43:29.487535  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 08:43:29.487565  258086 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.825655813s
	I1123 08:43:29.487576  258086 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 08:43:29.829693  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 08:43:29.829727  258086 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.16770936s
	I1123 08:43:29.829741  258086 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 08:43:29.829763  258086 cache.go:87] Successfully saved all images to host disk.
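At this point every image the --no-preload profile needs has been saved as a tarball under .minikube/cache/images/amd64, with per-image timings logged. The path mapping visible in those lines is mechanical: the tag separator becomes an underscore. A small sketch of that layout; cachePath is an invented helper standing in for minikube's own naming in cache.go:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachePath maps "registry.k8s.io/pause:3.10.1" to
    // <root>/cache/images/amd64/registry.k8s.io/pause_3.10.1, as in the log paths.
    func cachePath(root, image string) string {
        return filepath.Join(root, "cache", "images", "amd64",
            strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
        root := os.Getenv("MINIKUBE_HOME") // e.g. .../21969-13876/.minikube
        p := cachePath(root, "registry.k8s.io/pause:3.10.1")
        if _, err := os.Stat(p); err == nil {
            fmt.Println(p, "exists") // cache.go:157's "exists" branch
        } else {
            fmt.Println(p, "missing; would download and save the tar")
        }
    }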
	I1123 08:43:32.164591  258086 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-999106
	
	I1123 08:43:32.164618  258086 ubuntu.go:182] provisioning hostname "no-preload-999106"
	I1123 08:43:32.164701  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:32.183134  258086 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:32.183339  258086 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1123 08:43:32.183352  258086 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-999106 && echo "no-preload-999106" | sudo tee /etc/hostname
	I1123 08:43:32.340889  258086 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-999106
	
	I1123 08:43:32.340971  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:32.359419  258086 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:32.359677  258086 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1123 08:43:32.359696  258086 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-999106' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-999106/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-999106' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:43:29.985991  254114 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:43:29.990966  254114 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1123 08:43:29.990985  254114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:43:30.005005  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:43:30.649440  254114 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:43:30.649546  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:30.649581  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-204346 minikube.k8s.io/updated_at=2025_11_23T08_43_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=old-k8s-version-204346 minikube.k8s.io/primary=true
	I1123 08:43:30.659700  254114 ops.go:34] apiserver oom_adj: -16
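The oom_adj check above (run at 08:43:30.649) reads the kernel OOM adjustment of the apiserver process; -16 is the strongly negative value control-plane components end up with, making them unlikely OOM-kill targets. The probe is just a /proc read, roughly as below (pgrep target and guards added for illustration):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("no apiserver process:", err)
            return
        }
        pids := strings.Fields(string(out))
        if len(pids) == 0 {
            return
        }
        adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }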
	I1123 08:43:30.729410  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:31.230340  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:31.730113  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:32.230535  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:32.729772  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:32.505327  258086 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:43:32.505361  258086 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-13876/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-13876/.minikube}
	I1123 08:43:32.505408  258086 ubuntu.go:190] setting up certificates
	I1123 08:43:32.505430  258086 provision.go:84] configureAuth start
	I1123 08:43:32.505484  258086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-999106
	I1123 08:43:32.523951  258086 provision.go:143] copyHostCerts
	I1123 08:43:32.524019  258086 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem, removing ...
	I1123 08:43:32.524033  258086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem
	I1123 08:43:32.524115  258086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem (1675 bytes)
	I1123 08:43:32.524235  258086 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem, removing ...
	I1123 08:43:32.524248  258086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem
	I1123 08:43:32.524289  258086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem (1078 bytes)
	I1123 08:43:32.524373  258086 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem, removing ...
	I1123 08:43:32.524383  258086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem
	I1123 08:43:32.524416  258086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem (1123 bytes)
	I1123 08:43:32.524499  258086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem org=jenkins.no-preload-999106 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-999106]
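provision.go then mints a server certificate whose SANs cover every name the endpoint may be reached by: loopback, the static container IP 192.168.85.2, and the host/cluster names. A compact sketch of the SAN handling with crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-999106"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the profile
            // SANs match the san=[...] list logged by provision.go:117.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"localhost", "minikube", "no-preload-999106"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed for brevity; the real server.pem is signed by the minikube CA.
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }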
	I1123 08:43:32.587554  258086 provision.go:177] copyRemoteCerts
	I1123 08:43:32.587609  258086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:43:32.587655  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:32.605984  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:43:32.708249  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:43:32.727969  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:43:32.747752  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:43:32.766001  258086 provision.go:87] duration metric: took 260.555897ms to configureAuth
	I1123 08:43:32.766029  258086 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:43:32.766187  258086 config.go:182] Loaded profile config "no-preload-999106": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:43:32.766198  258086 machine.go:97] duration metric: took 3.773633247s to provisionDockerMachine
	I1123 08:43:32.766204  258086 client.go:176] duration metric: took 5.080019183s to LocalClient.Create
	I1123 08:43:32.766223  258086 start.go:167] duration metric: took 5.080101552s to libmachine.API.Create "no-preload-999106"
	I1123 08:43:32.766232  258086 start.go:293] postStartSetup for "no-preload-999106" (driver="docker")
	I1123 08:43:32.766242  258086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:43:32.766283  258086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:43:32.766317  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:32.785085  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:43:32.889673  258086 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:43:32.893433  258086 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:43:32.893459  258086 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:43:32.893470  258086 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/addons for local assets ...
	I1123 08:43:32.893520  258086 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/files for local assets ...
	I1123 08:43:32.893624  258086 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem -> 174422.pem in /etc/ssl/certs
	I1123 08:43:32.893761  258086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:43:32.902075  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem --> /etc/ssl/certs/174422.pem (1708 bytes)
	I1123 08:43:32.921898  258086 start.go:296] duration metric: took 155.652278ms for postStartSetup
	I1123 08:43:32.922243  258086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-999106
	I1123 08:43:32.940711  258086 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/config.json ...
	I1123 08:43:32.940999  258086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:43:32.941041  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:32.959311  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:43:33.058968  258086 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:43:33.063670  258086 start.go:128] duration metric: took 5.380278318s to createHost
	I1123 08:43:33.063696  258086 start.go:83] releasing machines lock for "no-preload-999106", held for 5.380396187s
	I1123 08:43:33.063776  258086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-999106
	I1123 08:43:33.082497  258086 ssh_runner.go:195] Run: cat /version.json
	I1123 08:43:33.082555  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:33.082576  258086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:43:33.082676  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:33.101516  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:43:33.101929  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:43:33.258150  258086 ssh_runner.go:195] Run: systemctl --version
	I1123 08:43:33.265003  258086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:43:33.270133  258086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:43:33.270202  258086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:43:33.301093  258086 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 08:43:33.301114  258086 start.go:496] detecting cgroup driver to use...
	I1123 08:43:33.301140  258086 detect.go:190] detected "systemd" cgroup driver on host os
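"systemd" is the expected answer on a cgroup v2 host, where systemd manages the unified hierarchy. One common probe for that (an assumption here, not necessarily detect.go's exact method) is the cgroup.controllers file at the cgroup mount root:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // cgroup v2's unified hierarchy exposes this file at the mount root.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 host: systemd cgroup driver")
        } else {
            fmt.Println("cgroup v1 host: cgroupfs is the usual default")
        }
    }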
	I1123 08:43:33.301187  258086 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:43:33.316380  258086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:43:33.328339  258086 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:43:33.328388  258086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:43:33.344573  258086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:43:33.362321  258086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:43:33.449438  258086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:43:33.532610  258086 docker.go:234] disabling docker service ...
	I1123 08:43:33.532689  258086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:43:33.551827  258086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:43:33.564985  258086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:43:33.650121  258086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:43:33.736173  258086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:43:33.749245  258086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:43:33.764351  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:43:33.774567  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:43:33.784258  258086 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 08:43:33.784327  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 08:43:33.794411  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:43:33.804033  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:43:33.812857  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:43:33.821787  258086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:43:33.829930  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:43:33.839002  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:43:33.847926  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:43:33.856822  258086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:43:33.864542  258086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:43:33.871885  258086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:43:33.950854  258086 ssh_runner.go:195] Run: sudo systemctl restart containerd
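The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the sandbox image to pause:3.10.1, force SystemdCgroup = true to match the driver detected earlier, retire the io.containerd.runtime.v1.linux and runc.v1 shims in favor of runc.v2, and re-enable unprivileged ports, after which containerd is restarted to pick the file up. The SystemdCgroup edit as a Go one-off, equivalent to the sed at 08:43:33.784 (it needs root, like the sudo sh invocations above):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Same substitution as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
        if err := os.WriteFile(path, out, 0644); err != nil {
            panic(err)
        }
    }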
	I1123 08:43:34.024458  258086 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:43:34.024534  258086 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:43:34.029083  258086 start.go:564] Will wait 60s for crictl version
	I1123 08:43:34.029145  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.032799  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:43:34.057987  258086 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:43:34.058049  258086 ssh_runner.go:195] Run: containerd --version
	I1123 08:43:34.078381  258086 ssh_runner.go:195] Run: containerd --version
	I1123 08:43:34.100680  258086 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:43:36.163341  206485 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.069407293s)
	W1123 08:43:36.163379  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1123 08:43:36.163391  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:36.163401  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:36.196694  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:36.196725  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:36.230996  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:36.231018  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:36.266205  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:36.266235  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:34.101669  258086 cli_runner.go:164] Run: docker network inspect no-preload-999106 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:34.119192  258086 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:43:34.123375  258086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:43:34.134033  258086 kubeadm.go:884] updating cluster {Name:no-preload-999106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-999106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:43:34.134129  258086 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:43:34.134170  258086 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:43:34.159373  258086 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1123 08:43:34.159392  258086 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1123 08:43:34.159438  258086 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:34.159452  258086 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.159485  258086 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.159504  258086 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.159534  258086 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.159485  258086 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:43:34.159583  258086 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.159658  258086 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.161000  258086 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.161332  258086 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.161540  258086 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:34.161951  258086 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.162137  258086 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.162179  258086 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.162238  258086 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:43:34.162370  258086 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.303423  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1123 08:43:34.303507  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.304294  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1123 08:43:34.304346  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.325396  258086 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1123 08:43:34.325443  258086 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.325489  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.325396  258086 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1123 08:43:34.325524  258086 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.325560  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.329408  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.330479  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.332092  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1123 08:43:34.332130  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1123 08:43:34.334793  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1123 08:43:34.334839  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.334892  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1123 08:43:34.334947  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.359405  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.359448  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.359453  258086 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1123 08:43:34.359480  258086 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1123 08:43:34.359511  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.359927  258086 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1123 08:43:34.359953  258086 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.359986  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.362071  258086 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1123 08:43:34.362107  258086 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.362148  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.386773  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.388038  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.388124  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:34.388148  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.388227  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.402862  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1123 08:43:34.402936  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.406588  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1123 08:43:34.406683  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.419900  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:43:34.420019  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:34.422632  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:43:34.422820  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:34.422852  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:43:34.422867  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.422905  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.432625  258086 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1123 08:43:34.432698  258086 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.432750  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.435170  258086 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1123 08:43:34.435213  258086 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.435236  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1123 08:43:34.435258  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.435263  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1123 08:43:34.468602  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1123 08:43:34.468621  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:34.468654  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1123 08:43:34.468703  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.468726  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.468757  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.468795  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.563471  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1123 08:43:34.563530  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.563577  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:43:34.563667  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:43:34.563682  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.563581  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:34.563706  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:34.563755  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:43:34.626877  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1123 08:43:34.626895  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1123 08:43:34.626913  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1123 08:43:34.626923  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1123 08:43:34.626927  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1123 08:43:34.626943  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1123 08:43:34.626974  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.627042  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.685224  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:43:34.685246  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:43:34.685326  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:34.685340  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:34.700613  258086 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:34.700688  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:34.713376  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1123 08:43:34.713409  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1123 08:43:34.713407  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1123 08:43:34.713434  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1123 08:43:34.840943  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1123 08:43:34.885583  258086 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:34.885674  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:35.489785  258086 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1123 08:43:35.489853  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:36.097868  258086 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.212165923s)
	I1123 08:43:36.097898  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1123 08:43:36.097915  258086 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1123 08:43:36.097931  258086 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:36.097957  258086 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:36.097992  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:36.098005  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:37.105043  258086 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.007027025s)
	I1123 08:43:37.105070  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1123 08:43:37.105098  258086 ssh_runner.go:235] Completed: which crictl: (1.007074313s)
	I1123 08:43:37.105104  258086 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:43:37.105153  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:37.105159  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:43:37.133915  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
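Everything from 08:43:34.159 onward is the LoadCachedImages path for a no-preload cluster: probe each image in containerd by name and expected sha (ctr -n=k8s.io images ls), mark it "needs transfer" when absent, clear stale tags with crictl rmi, scp the cached tarball into /var/lib/minikube/images, then ctr images import it. A condensed per-image sketch; it checks by name only (the real code also compares the sha) and runs locally, whereas minikube drives the same commands over SSH inside the node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // loadImage imports a cached tarball into containerd's k8s.io namespace
    // unless an image with that name is already present.
    func loadImage(image, tarball string) error {
        out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls",
            "name=="+image).Output()
        if err != nil {
            return err
        }
        // ctr prints a header row; any further line means the image exists.
        if len(strings.Split(strings.TrimSpace(string(out)), "\n")) > 1 {
            return nil
        }
        exec.Command("sudo", "crictl", "rmi", image).Run() // drop stale tags, best effort
        return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).Run()
    }

    func main() {
        fmt.Println(loadImage("registry.k8s.io/pause:3.10.1",
            "/var/lib/minikube/images/pause_3.10.1"))
    }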
	I1123 08:43:33.230087  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:33.729573  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:34.229556  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:34.729739  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:35.229458  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:35.729622  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:36.229768  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:36.730508  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:37.229765  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:37.729788  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:38.229952  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:38.730333  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:39.229833  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:39.729862  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:40.229901  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:40.729885  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:41.230479  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:41.730515  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:42.230247  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:42.326336  254114 kubeadm.go:1114] duration metric: took 11.676850942s to wait for elevateKubeSystemPrivileges
	I1123 08:43:42.326376  254114 kubeadm.go:403] duration metric: took 21.509472133s to StartCluster
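The long run of kubectl get sa default calls at a 500ms cadence (08:43:30.729 through 08:43:42.230) is a readiness poll: the default ServiceAccount only materializes once the controller manager's token controller is running, so kubeadm.go retries until it appears, 11.7s in this run. The general shape of that loop, with waitFor as an invented helper:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitFor polls check every interval until it succeeds or timeout elapses.
    func waitFor(interval, timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitFor(500*time.Millisecond, 3*time.Minute, func() error {
            // Succeeds once the default ServiceAccount exists in the cluster.
            return exec.Command("kubectl", "get", "sa", "default").Run()
        })
        fmt.Println(err)
    }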
	I1123 08:43:42.326398  254114 settings.go:142] acquiring lock: {Name:mk2c00a8b461754a49d5c7fd5af34c7d1005153a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:42.326470  254114 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:43:42.328223  254114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/kubeconfig: {Name:mk636046b7146fd65b5638a6d549b76e61f7f055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:42.328482  254114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:43:42.328500  254114 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:43:42.328566  254114 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:43:42.328729  254114 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-204346"
	I1123 08:43:42.328754  254114 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-204346"
	I1123 08:43:42.328778  254114 config.go:182] Loaded profile config "old-k8s-version-204346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:43:42.328793  254114 host.go:66] Checking if "old-k8s-version-204346" exists ...
	I1123 08:43:42.328837  254114 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-204346"
	I1123 08:43:42.328856  254114 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-204346"
	I1123 08:43:42.329183  254114 cli_runner.go:164] Run: docker container inspect old-k8s-version-204346 --format={{.State.Status}}
	I1123 08:43:42.329321  254114 cli_runner.go:164] Run: docker container inspect old-k8s-version-204346 --format={{.State.Status}}
	I1123 08:43:42.331021  254114 out.go:179] * Verifying Kubernetes components...
	I1123 08:43:42.332482  254114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:43:42.357866  254114 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:38.827550  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:38.827977  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:38.828023  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:38.828070  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:38.854573  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:38.854598  206485 cri.go:89] found id: "89f5abdf45afb9ff15a0744d6b71c9196e67d8f1e07dbde6c14130fa812cd030"
	I1123 08:43:38.854603  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:38.854606  206485 cri.go:89] found id: ""
	I1123 08:43:38.854613  206485 logs.go:282] 3 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 89f5abdf45afb9ff15a0744d6b71c9196e67d8f1e07dbde6c14130fa812cd030 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:38.854688  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.858901  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.862744  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.866475  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:38.866533  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:38.892493  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:38.892520  206485 cri.go:89] found id: ""
	I1123 08:43:38.892528  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:38.892575  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.896728  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:38.896790  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:38.923307  206485 cri.go:89] found id: ""
	I1123 08:43:38.923331  206485 logs.go:282] 0 containers: []
	W1123 08:43:38.923340  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:38.923346  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:38.923392  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:38.949371  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:38.949396  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:38.949401  206485 cri.go:89] found id: ""
	I1123 08:43:38.949407  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:38.949452  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.953461  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.957266  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:38.957315  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:38.984054  206485 cri.go:89] found id: ""
	I1123 08:43:38.984077  206485 logs.go:282] 0 containers: []
	W1123 08:43:38.984084  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:38.984090  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:38.984144  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:39.014867  206485 cri.go:89] found id: "7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:39.014894  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:39.014900  206485 cri.go:89] found id: ""
	I1123 08:43:39.014909  206485 logs.go:282] 2 containers: [7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:39.014988  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:39.019876  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:39.024471  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:39.024545  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:39.056343  206485 cri.go:89] found id: ""
	I1123 08:43:39.056370  206485 logs.go:282] 0 containers: []
	W1123 08:43:39.056382  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:39.056390  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:39.056447  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:39.087173  206485 cri.go:89] found id: ""
	I1123 08:43:39.087200  206485 logs.go:282] 0 containers: []
	W1123 08:43:39.087209  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:39.087218  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:39.087230  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:39.143340  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:39.143373  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:39.182502  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:39.182538  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:39.220490  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:39.220526  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:39.279713  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:39.279751  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:39.296632  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:39.296672  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:39.369445  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:39.369477  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:39.369493  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:39.412743  206485 logs.go:123] Gathering logs for kube-controller-manager [7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb] ...
	I1123 08:43:39.412782  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:39.445988  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:39.446015  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:39.482074  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:39.482110  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:39.578994  206485 logs.go:123] Gathering logs for kube-apiserver [89f5abdf45afb9ff15a0744d6b71c9196e67d8f1e07dbde6c14130fa812cd030] ...
	I1123 08:43:39.579036  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89f5abdf45afb9ff15a0744d6b71c9196e67d8f1e07dbde6c14130fa812cd030"
	I1123 08:43:39.619624  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:39.619684  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:39.661136  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:39.661175  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:42.204267  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:42.204712  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:42.204771  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:42.204826  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:42.232709  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:42.232730  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:42.232735  206485 cri.go:89] found id: ""
	I1123 08:43:42.232744  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:42.232799  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.236622  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.240968  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:42.241028  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:42.281849  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:42.281877  206485 cri.go:89] found id: ""
	I1123 08:43:42.281885  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:42.281942  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.287991  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:42.288063  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:42.327625  206485 cri.go:89] found id: ""
	I1123 08:43:42.327669  206485 logs.go:282] 0 containers: []
	W1123 08:43:42.327679  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:42.327687  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:42.327768  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:39.015203  258086 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.910026064s)
	I1123 08:43:39.015228  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1123 08:43:39.015249  258086 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:39.015286  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:39.015301  258086 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881356677s)
	I1123 08:43:39.015367  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:39.981839  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1123 08:43:39.981862  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1123 08:43:39.981901  258086 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:39.981948  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:43:39.981955  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:39.985933  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1123 08:43:39.985965  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1123 08:43:41.077380  258086 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.095406466s)
	I1123 08:43:41.077408  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1123 08:43:41.077435  258086 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:43:41.077497  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
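Each cache_images step in this stream follows the same recipe: stat the image tarball on the node, copy it from the host cache only when the stat fails, then import it into containerd's k8s.io namespace with ctr. A schematic per-image sketch; NODE_SSH_PORT and SSH_KEY are stand-ins for the values minikube's sshutil resolves, and the stat/ctr commands actually execute on the node over SSH:

	# copy the tarball only if the node doesn't already have it
	stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0 >/dev/null 2>&1 || \
	  scp -P "$NODE_SSH_PORT" -i "$SSH_KEY" \
	    /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 \
	    docker@127.0.0.1:/var/lib/minikube/images/etcd_3.6.4-0
	# load it into the namespace containerd uses for Kubernetes pods
	sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0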
	I1123 08:43:42.358205  254114 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-204346"
	I1123 08:43:42.358246  254114 host.go:66] Checking if "old-k8s-version-204346" exists ...
	I1123 08:43:42.358752  254114 cli_runner.go:164] Run: docker container inspect old-k8s-version-204346 --format={{.State.Status}}
	I1123 08:43:42.359206  254114 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:43:42.359225  254114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:43:42.359285  254114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-204346
	I1123 08:43:42.389614  254114 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:43:42.389635  254114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:43:42.389707  254114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-204346
	I1123 08:43:42.391185  254114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/old-k8s-version-204346/id_rsa Username:docker}
	I1123 08:43:42.422459  254114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/old-k8s-version-204346/id_rsa Username:docker}
	I1123 08:43:42.449217  254114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:43:42.517611  254114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:43:42.534960  254114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:43:42.564953  254114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:43:42.780756  254114 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 08:43:42.781954  254114 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-204346" to be "Ready" ...
	I1123 08:43:43.034443  254114 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:43:43.035744  254114 addons.go:530] duration metric: took 707.164659ms for enable addons: enabled=[storage-provisioner default-storageclass]
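Addon enablement above is scp-then-apply: each manifest is written under /etc/kubernetes/addons/ through the node's SSH port (33058 for this profile) and applied with the cluster's pinned kubectl. Condensed into one command, with the port, key, and paths taken from the log:

	# apply the staged manifest with the version-matched kubectl on the node
	ssh -p 33058 -i /home/jenkins/minikube-integration/21969-13876/.minikube/machines/old-k8s-version-204346/id_rsa docker@127.0.0.1 \
	  'sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml'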
	I1123 08:43:42.368955  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:42.368979  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:42.368985  206485 cri.go:89] found id: ""
	I1123 08:43:42.368996  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:42.370472  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.378043  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.388658  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:42.388749  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:42.429522  206485 cri.go:89] found id: ""
	I1123 08:43:42.429549  206485 logs.go:282] 0 containers: []
	W1123 08:43:42.429559  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:42.429566  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:42.429632  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:42.469043  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:42.469070  206485 cri.go:89] found id: "7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:42.469076  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:42.469081  206485 cri.go:89] found id: ""
	I1123 08:43:42.469089  206485 logs.go:282] 3 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:42.469144  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.475315  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.481874  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.488696  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:42.488921  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:42.533856  206485 cri.go:89] found id: ""
	I1123 08:43:42.533914  206485 logs.go:282] 0 containers: []
	W1123 08:43:42.533926  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:42.533934  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:42.534029  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:42.577521  206485 cri.go:89] found id: ""
	I1123 08:43:42.577543  206485 logs.go:282] 0 containers: []
	W1123 08:43:42.577550  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:42.577559  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:42.577568  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:42.665576  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:42.665601  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:42.665622  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:42.723908  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:42.723945  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:42.766588  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:42.766618  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:42.815960  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:42.816050  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:42.836362  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:42.836393  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:42.883211  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:42.883249  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:42.925983  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:42.926057  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:43.002532  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:43.002565  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:43.048891  206485 logs.go:123] Gathering logs for kube-controller-manager [7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb] ...
	I1123 08:43:43.048923  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:43.080573  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:43.080606  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:43.145471  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:43.145510  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:43.182994  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:43.183035  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:45.803715  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:45.804092  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:45.804151  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:45.804211  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:45.842142  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:45.842161  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:45.842165  206485 cri.go:89] found id: ""
	I1123 08:43:45.842172  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:45.842223  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.846225  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.850730  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:45.850797  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:45.879479  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:45.879506  206485 cri.go:89] found id: ""
	I1123 08:43:45.879515  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:45.879576  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.884738  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:45.884801  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:45.916040  206485 cri.go:89] found id: ""
	I1123 08:43:45.916069  206485 logs.go:282] 0 containers: []
	W1123 08:43:45.916080  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:45.916088  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:45.916155  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:45.947206  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:45.947237  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:45.947242  206485 cri.go:89] found id: ""
	I1123 08:43:45.947252  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:45.947308  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.952246  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.956172  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:45.956233  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:45.986919  206485 cri.go:89] found id: ""
	I1123 08:43:45.986945  206485 logs.go:282] 0 containers: []
	W1123 08:43:45.986956  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:45.986964  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:45.987017  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:46.019241  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:46.019269  206485 cri.go:89] found id: "7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:46.019273  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:46.019278  206485 cri.go:89] found id: ""
	I1123 08:43:46.019286  206485 logs.go:282] 3 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:46.019345  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:46.024190  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:46.028847  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:46.033363  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:46.033436  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:46.067781  206485 cri.go:89] found id: ""
	I1123 08:43:46.067808  206485 logs.go:282] 0 containers: []
	W1123 08:43:46.067819  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:46.067827  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:46.067885  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:46.100053  206485 cri.go:89] found id: ""
	I1123 08:43:46.100084  206485 logs.go:282] 0 containers: []
	W1123 08:43:46.100094  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:46.100107  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:46.100122  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:46.146426  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:46.146456  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:46.208332  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:46.208375  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:46.247193  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:46.247229  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:46.264714  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:46.264742  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:46.336341  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:46.336363  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:46.336376  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:46.379827  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:46.379866  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:46.425899  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:46.425925  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:46.491769  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:46.491805  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:46.523775  206485 logs.go:123] Gathering logs for kube-controller-manager [7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb] ...
	I1123 08:43:46.523805  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:46.555025  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:46.555060  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:46.592667  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:46.592709  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:46.691047  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:46.691081  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:43.958800  258086 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.881269634s)
	I1123 08:43:43.958835  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 08:43:43.958864  258086 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:43:43.958908  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:43:44.336453  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 08:43:44.336514  258086 cache_images.go:125] Successfully loaded all cached images
	I1123 08:43:44.336522  258086 cache_images.go:94] duration metric: took 10.177118s to LoadCachedImages
	I1123 08:43:44.336535  258086 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1123 08:43:44.336675  258086 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-999106 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-999106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:43:44.336740  258086 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:43:44.361999  258086 cni.go:84] Creating CNI manager for ""
	I1123 08:43:44.362021  258086 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:43:44.362037  258086 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:43:44.362060  258086 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-999106 NodeName:no-preload-999106 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:43:44.362197  258086 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-999106"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
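This rendered kubeadm config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2229-byte scp below) and promoted to kubeadm.yaml just before init. A hedged sketch of the invocation that consumes it; minikube's real call additionally passes --ignore-preflight-errors flags, as the "ignoring SystemVerification" line near the end of this log suggests:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml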
	
	I1123 08:43:44.362266  258086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:43:44.371147  258086 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 08:43:44.371205  258086 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 08:43:44.379477  258086 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1123 08:43:44.379559  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 08:43:44.379560  258086 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1123 08:43:44.379590  258086 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1123 08:43:44.384906  258086 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 08:43:44.384935  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1123 08:43:45.307760  258086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:43:45.321272  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 08:43:45.325776  258086 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 08:43:45.325807  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1123 08:43:45.440984  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 08:43:45.448490  258086 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 08:43:45.448546  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1123 08:43:45.718942  258086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:43:45.729752  258086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 08:43:45.746904  258086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:43:45.764606  258086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1123 08:43:45.779438  258086 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:43:45.783637  258086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
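The temp-file dance above is deliberate: "sudo echo ... >> /etc/hosts" would fail because the redirection happens as the unprivileged user, so the rewritten file is built in /tmp and installed with sudo cp. Spelled out:

	# rebuild /etc/hosts without any stale control-plane record, append the
	# current one, then install the result with root privileges
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	  echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts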
	I1123 08:43:45.795787  258086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:43:45.901866  258086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:43:45.931680  258086 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106 for IP: 192.168.85.2
	I1123 08:43:45.931702  258086 certs.go:195] generating shared ca certs ...
	I1123 08:43:45.931722  258086 certs.go:227] acquiring lock for ca certs: {Name:mk376e2c25eb30d8b09b93cb4624441e819bcc8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:45.931883  258086 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.key
	I1123 08:43:45.931922  258086 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.key
	I1123 08:43:45.931931  258086 certs.go:257] generating profile certs ...
	I1123 08:43:45.932023  258086 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.key
	I1123 08:43:45.932046  258086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.crt with IP's: []
	I1123 08:43:46.076820  258086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.crt ...
	I1123 08:43:46.076852  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.crt: {Name:mk264e21cffc1d235a0a5153e1f533874608a488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.077062  258086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.key ...
	I1123 08:43:46.077094  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.key: {Name:mk09f5a31cd584eb4ea102a803f662bacda0e612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.077204  258086 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key.ff765c4c
	I1123 08:43:46.077226  258086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt.ff765c4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:43:46.147038  258086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt.ff765c4c ...
	I1123 08:43:46.147076  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt.ff765c4c: {Name:mk2b60ecfaddc28f6e9e91bd0ff2b48be7ad7023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.147257  258086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key.ff765c4c ...
	I1123 08:43:46.147277  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key.ff765c4c: {Name:mk8ce7b23d7c04fba7d8d30f580f5ae25a8eaa1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.147393  258086 certs.go:382] copying /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt.ff765c4c -> /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt
	I1123 08:43:46.147504  258086 certs.go:386] copying /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key.ff765c4c -> /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key
	I1123 08:43:46.147597  258086 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.key
	I1123 08:43:46.147614  258086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.crt with IP's: []
	I1123 08:43:46.188254  258086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.crt ...
	I1123 08:43:46.188285  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.crt: {Name:mkce831c55c8c6f96bdb743bd92d80212f28ceec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.188486  258086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.key ...
	I1123 08:43:46.188506  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.key: {Name:mk2b9a4c76ac3acf445fdcb1e14850de2c1a5507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
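certs.go generates each profile certificate in-process (the crypto.go lines above), signing it with the shared minikubeCA and embedding the listed IP SANs. An illustrative openssl equivalent for the apiserver cert, with the SANs copied from the log; minikube itself does not shell out to openssl for this:

	# issue a CA-signed server cert carrying the service, loopback, and node IPs
	openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	  -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2')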
	I1123 08:43:46.188762  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442.pem (1338 bytes)
	W1123 08:43:46.188820  258086 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442_empty.pem, impossibly tiny 0 bytes
	I1123 08:43:46.188836  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:43:46.188874  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:43:46.188907  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:43:46.188942  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem (1675 bytes)
	I1123 08:43:46.189009  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem (1708 bytes)
	I1123 08:43:46.189889  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:43:46.212738  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:43:46.235727  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:43:46.259309  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:43:46.282164  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:43:46.305443  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:43:46.328998  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:43:46.351947  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:43:46.375511  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:43:46.401909  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442.pem --> /usr/share/ca-certificates/17442.pem (1338 bytes)
	I1123 08:43:46.424180  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem --> /usr/share/ca-certificates/174422.pem (1708 bytes)
	I1123 08:43:46.445575  258086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:43:46.461580  258086 ssh_runner.go:195] Run: openssl version
	I1123 08:43:46.468524  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:43:46.477534  258086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:46.482510  258086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:46.482577  258086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:46.523991  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:43:46.535125  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17442.pem && ln -fs /usr/share/ca-certificates/17442.pem /etc/ssl/certs/17442.pem"
	I1123 08:43:46.546052  258086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17442.pem
	I1123 08:43:46.552569  258086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:16 /usr/share/ca-certificates/17442.pem
	I1123 08:43:46.552702  258086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17442.pem
	I1123 08:43:46.600806  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17442.pem /etc/ssl/certs/51391683.0"
	I1123 08:43:46.610524  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/174422.pem && ln -fs /usr/share/ca-certificates/174422.pem /etc/ssl/certs/174422.pem"
	I1123 08:43:46.621451  258086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/174422.pem
	I1123 08:43:46.625905  258086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:16 /usr/share/ca-certificates/174422.pem
	I1123 08:43:46.625966  258086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/174422.pem
	I1123 08:43:46.663055  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/174422.pem /etc/ssl/certs/3ec20f2e.0"
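The three hash-and-link sequences above follow OpenSSL's CA-path convention: the link name under /etc/ssl/certs is the subject hash printed by "openssl x509 -hash" plus a collision index (".0"). A minimal shell sketch reproducing one link name, assuming the same paths as the log:

    # Derive the c_rehash-style link name for the minikube CA (illustrative paths).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"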
	I1123 08:43:46.672614  258086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:43:46.676799  258086 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
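The failed stat is expected: minikube probes for a kubeadm-issued client certificate and treats its absence as a first start, proceeding straight to "kubeadm init" rather than reusing an existing cluster. A sketch of the same probe (the echoed message is illustrative, not minikube's):

    # stat exits non-zero when the cert was never generated.
    if ! stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo "no kubeadm certs yet: first start"
    fi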
	I1123 08:43:46.676865  258086 kubeadm.go:401] StartCluster: {Name:no-preload-999106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-999106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:43:46.676948  258086 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:43:46.677027  258086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:43:46.706515  258086 cri.go:89] found id: ""
	I1123 08:43:46.706599  258086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:43:46.715791  258086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:43:46.725599  258086 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:43:46.725695  258086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:43:46.734727  258086 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:43:46.734752  258086 kubeadm.go:158] found existing configuration files:
	
	I1123 08:43:46.734794  258086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:43:46.743841  258086 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:43:46.743892  258086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:43:46.752521  258086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:43:46.761347  258086 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:43:46.761400  258086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:43:46.769196  258086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:43:46.777174  258086 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:43:46.777227  258086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:43:46.784869  258086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:43:46.793707  258086 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:43:46.793768  258086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
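The four grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so kubeadm regenerates it. Sketched as a loop (the loop itself is an illustration, assuming the endpoint and paths from the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done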
	I1123 08:43:46.801586  258086 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:43:46.858285  258086 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:43:46.916186  258086 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:43:43.286172  254114 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-204346" context rescaled to 1 replicas
	W1123 08:43:44.785588  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
	W1123 08:43:46.785746  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
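These node_ready retries poll the node's Ready condition until it flips to True. The same condition can be read directly with kubectl's jsonpath output, assuming the context and node name from the log:

    kubectl --context old-k8s-version-204346 get node old-k8s-version-204346 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'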
	I1123 08:43:49.228668  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:49.229070  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:49.229121  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:49.229170  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:49.256973  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:49.256994  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:49.257000  206485 cri.go:89] found id: ""
	I1123 08:43:49.257008  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:49.257070  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.261237  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.264766  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:49.264830  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:49.290113  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:49.290135  206485 cri.go:89] found id: ""
	I1123 08:43:49.290145  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:49.290199  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.293989  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:49.294053  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:49.320161  206485 cri.go:89] found id: ""
	I1123 08:43:49.320191  206485 logs.go:282] 0 containers: []
	W1123 08:43:49.320202  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:49.320210  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:49.320264  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:49.347363  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:49.347384  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:49.347391  206485 cri.go:89] found id: ""
	I1123 08:43:49.347407  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:49.347464  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.351525  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.355374  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:49.355433  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:49.382984  206485 cri.go:89] found id: ""
	I1123 08:43:49.383010  206485 logs.go:282] 0 containers: []
	W1123 08:43:49.383020  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:49.383028  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:49.383086  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:49.409377  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:49.409402  206485 cri.go:89] found id: "7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:49.409408  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:49.409413  206485 cri.go:89] found id: ""
	I1123 08:43:49.409421  206485 logs.go:282] 3 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:49.409468  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.413850  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.417701  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.421307  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:49.421373  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:49.447409  206485 cri.go:89] found id: ""
	I1123 08:43:49.447433  206485 logs.go:282] 0 containers: []
	W1123 08:43:49.447444  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:49.447451  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:49.447512  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:49.474526  206485 cri.go:89] found id: ""
	I1123 08:43:49.474554  206485 logs.go:282] 0 containers: []
	W1123 08:43:49.474562  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:49.474572  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:49.474580  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:49.566947  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:49.566990  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:49.581192  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:49.581218  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:49.640574  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:49.640596  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:49.640610  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:49.676070  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:49.676097  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:49.710524  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:49.710555  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:49.785389  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:49.785422  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:49.819651  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:49.819677  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:49.847192  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:49.847216  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:49.878622  206485 logs.go:123] Gathering logs for kube-controller-manager [7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb] ...
	I1123 08:43:49.878674  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:49.904924  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:49.904958  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:49.937225  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:49.937252  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:49.987441  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:49.987483  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
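The "container status" gatherer closes each log-collection pass with a deliberate fallback: use crictl if it resolves on PATH, otherwise fall back to the Docker CLI. Spelled out as equivalent shell logic (an illustration, not minikube's code):

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi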
	W1123 08:43:49.285708  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
	W1123 08:43:51.285827  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
	I1123 08:43:56.990600  258086 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:43:56.990724  258086 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:43:56.990889  258086 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:43:56.990976  258086 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:43:56.991027  258086 kubeadm.go:319] OS: Linux
	I1123 08:43:56.991098  258086 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:43:56.991170  258086 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:43:56.991327  258086 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:43:56.991401  258086 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:43:56.991513  258086 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:43:56.991594  258086 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:43:56.991696  258086 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:43:56.991760  258086 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:43:56.991928  258086 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:43:56.992079  258086 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:43:56.992203  258086 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:43:56.992277  258086 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:43:56.993629  258086 out.go:252]   - Generating certificates and keys ...
	I1123 08:43:56.993773  258086 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:43:56.993882  258086 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:43:56.993978  258086 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:43:56.994054  258086 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:43:56.994139  258086 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:43:56.994210  258086 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:43:56.994287  258086 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:43:56.994448  258086 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-999106] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:43:56.994523  258086 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:43:56.994701  258086 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-999106] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:43:56.994808  258086 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:43:56.994907  258086 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:43:56.994974  258086 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:43:56.995052  258086 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:43:56.995136  258086 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:43:56.995230  258086 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:43:56.995314  258086 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:43:56.995407  258086 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:43:56.995507  258086 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:43:56.995596  258086 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:43:56.995670  258086 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:43:56.998197  258086 out.go:252]   - Booting up control plane ...
	I1123 08:43:56.998282  258086 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:43:56.998367  258086 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:43:56.998479  258086 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:43:56.998614  258086 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:43:56.998760  258086 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:43:56.998861  258086 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:43:56.998949  258086 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:43:56.998984  258086 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:43:56.999108  258086 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:43:56.999224  258086 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:43:56.999284  258086 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.962401ms
	I1123 08:43:56.999376  258086 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:43:56.999453  258086 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 08:43:56.999531  258086 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:43:56.999598  258086 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:43:56.999680  258086 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.69972236s
	I1123 08:43:56.999756  258086 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.979262438s
	I1123 08:43:56.999857  258086 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502236354s
	I1123 08:43:56.999983  258086 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:43:57.000181  258086 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:43:57.000269  258086 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:43:57.000528  258086 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-999106 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:43:57.000596  258086 kubeadm.go:319] [bootstrap-token] Using token: augmq1.wtvrtjusohbhz9fp
	I1123 08:43:57.002234  258086 out.go:252]   - Configuring RBAC rules ...
	I1123 08:43:57.002330  258086 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:43:57.002408  258086 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:43:57.002539  258086 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:43:57.002709  258086 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:43:57.002823  258086 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:43:57.002898  258086 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:43:57.003040  258086 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:43:57.003091  258086 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:43:57.003157  258086 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:43:57.003173  258086 kubeadm.go:319] 
	I1123 08:43:57.003224  258086 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:43:57.003229  258086 kubeadm.go:319] 
	I1123 08:43:57.003293  258086 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:43:57.003299  258086 kubeadm.go:319] 
	I1123 08:43:57.003325  258086 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:43:57.003380  258086 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:43:57.003424  258086 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:43:57.003429  258086 kubeadm.go:319] 
	I1123 08:43:57.003474  258086 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:43:57.003483  258086 kubeadm.go:319] 
	I1123 08:43:57.003523  258086 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:43:57.003529  258086 kubeadm.go:319] 
	I1123 08:43:57.003586  258086 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:43:57.003674  258086 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:43:57.003774  258086 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:43:57.003795  258086 kubeadm.go:319] 
	I1123 08:43:57.003914  258086 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:43:57.004021  258086 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:43:57.004031  258086 kubeadm.go:319] 
	I1123 08:43:57.004153  258086 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token augmq1.wtvrtjusohbhz9fp \
	I1123 08:43:57.004275  258086 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c48a3b11504a9c7a5d242d913eadf6a5354a8cb06c9ffcf8385d22efb04d8fa \
	I1123 08:43:57.004298  258086 kubeadm.go:319] 	--control-plane 
	I1123 08:43:57.004302  258086 kubeadm.go:319] 
	I1123 08:43:57.004373  258086 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:43:57.004379  258086 kubeadm.go:319] 
	I1123 08:43:57.004452  258086 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token augmq1.wtvrtjusohbhz9fp \
	I1123 08:43:57.004563  258086 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c48a3b11504a9c7a5d242d913eadf6a5354a8cb06c9ffcf8385d22efb04d8fa 
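The --discovery-token-ca-cert-hash printed above is a SHA-256 over the cluster CA's DER-encoded public key. kubeadm's documented way to recompute it from the CA certificate, assuming the standard kubeadm path (minikube keeps its copy under /var/lib/minikube/certs):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'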
	I1123 08:43:57.004575  258086 cni.go:84] Creating CNI manager for ""
	I1123 08:43:57.004581  258086 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:43:57.007194  258086 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:43:52.520061  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:52.520694  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:52.520747  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:52.520799  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:52.553943  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:52.553969  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:52.553975  206485 cri.go:89] found id: ""
	I1123 08:43:52.553983  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:52.554042  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.559842  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.565197  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:52.565266  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:52.601499  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:52.601529  206485 cri.go:89] found id: ""
	I1123 08:43:52.601568  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:52.601621  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.606848  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:52.606925  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:52.645028  206485 cri.go:89] found id: ""
	I1123 08:43:52.645061  206485 logs.go:282] 0 containers: []
	W1123 08:43:52.645072  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:52.645079  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:52.645139  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:52.681457  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:52.681484  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:52.681490  206485 cri.go:89] found id: ""
	I1123 08:43:52.681499  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:52.681557  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.686548  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.690588  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:52.690682  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:52.723180  206485 cri.go:89] found id: ""
	I1123 08:43:52.723208  206485 logs.go:282] 0 containers: []
	W1123 08:43:52.723217  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:52.723224  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:52.723287  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:52.756887  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:52.756911  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:52.756921  206485 cri.go:89] found id: ""
	I1123 08:43:52.756929  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:52.756985  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.761180  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.765188  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:52.765247  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:52.795290  206485 cri.go:89] found id: ""
	I1123 08:43:52.795319  206485 logs.go:282] 0 containers: []
	W1123 08:43:52.795329  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:52.795336  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:52.795395  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:52.822978  206485 cri.go:89] found id: ""
	I1123 08:43:52.823006  206485 logs.go:282] 0 containers: []
	W1123 08:43:52.823013  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:52.823022  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:52.823034  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:52.859205  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:52.859240  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:52.910295  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:52.910334  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:52.948004  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:52.948045  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:52.982700  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:52.982734  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:53.055592  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:53.055634  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:53.097286  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:53.097327  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:53.133102  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:53.133146  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:53.170688  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:53.170722  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:53.281419  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:53.281464  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:53.298748  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:53.298777  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:53.373016  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:53.373040  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:53.373054  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:55.914776  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:55.915250  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:55.915303  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:55.915351  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:55.943544  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:55.943567  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:55.943572  206485 cri.go:89] found id: ""
	I1123 08:43:55.943579  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:55.943622  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:55.948391  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:55.952924  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:55.952992  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:55.981407  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:55.981431  206485 cri.go:89] found id: ""
	I1123 08:43:55.981441  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:55.981501  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:55.986304  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:55.986378  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:56.014167  206485 cri.go:89] found id: ""
	I1123 08:43:56.014192  206485 logs.go:282] 0 containers: []
	W1123 08:43:56.014200  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:56.014206  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:56.014262  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:56.050121  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:56.050153  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:56.050160  206485 cri.go:89] found id: ""
	I1123 08:43:56.050170  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:56.050236  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:56.055306  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:56.059507  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:56.059586  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:56.092810  206485 cri.go:89] found id: ""
	I1123 08:43:56.092843  206485 logs.go:282] 0 containers: []
	W1123 08:43:56.092856  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:56.092864  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:56.092931  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:56.126845  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:56.126869  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:56.126874  206485 cri.go:89] found id: ""
	I1123 08:43:56.126884  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:56.126939  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:56.131943  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:56.135880  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:56.135945  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:56.163669  206485 cri.go:89] found id: ""
	I1123 08:43:56.163696  206485 logs.go:282] 0 containers: []
	W1123 08:43:56.163707  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:56.163714  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:56.163773  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:56.197602  206485 cri.go:89] found id: ""
	I1123 08:43:56.197638  206485 logs.go:282] 0 containers: []
	W1123 08:43:56.197660  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:56.197672  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:56.197689  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:56.238940  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:56.238981  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:56.288636  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:56.288691  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:56.324266  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:56.324299  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:56.378458  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:56.378498  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:56.417284  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:56.417313  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:56.509149  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:56.509182  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:56.523057  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:56.523082  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:56.583048  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:56.583074  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:56.583095  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:56.618320  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:56.618358  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:56.651682  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:56.651713  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:56.709657  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:56.709694  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:57.008714  258086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:43:57.013402  258086 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:43:57.013443  258086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:43:57.028881  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:43:57.253419  258086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:43:57.253530  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:57.253599  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-999106 minikube.k8s.io/updated_at=2025_11_23T08_43_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=no-preload-999106 minikube.k8s.io/primary=true
	I1123 08:43:57.264168  258086 ops.go:34] apiserver oom_adj: -16
	I1123 08:43:57.330032  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
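Two post-init touches appear in this block: minikube reads the apiserver's OOM adjustment (the -16 above makes the kernel unlikely to kill it under memory pressure) and grants cluster-admin to the kube-system default service account via the minikube-rbac binding. Reading the same value by hand (oom_adj is the legacy procfs interface; newer kernels also expose oom_score_adj):

    sudo cat "/proc/$(pgrep -n kube-apiserver)/oom_adj"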
	W1123 08:43:53.286319  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
	W1123 08:43:55.786003  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
	I1123 08:43:57.285411  254114 node_ready.go:49] node "old-k8s-version-204346" is "Ready"
	I1123 08:43:57.285445  254114 node_ready.go:38] duration metric: took 14.503433565s for node "old-k8s-version-204346" to be "Ready" ...
	I1123 08:43:57.285462  254114 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:43:57.285564  254114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:43:57.301686  254114 api_server.go:72] duration metric: took 14.973147695s to wait for apiserver process to appear ...
	I1123 08:43:57.301718  254114 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:43:57.301742  254114 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:43:57.306545  254114 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:43:57.308093  254114 api_server.go:141] control plane version: v1.28.0
	I1123 08:43:57.308124  254114 api_server.go:131] duration metric: took 6.398178ms to wait for apiserver health ...
	I1123 08:43:57.308135  254114 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:43:57.312486  254114 system_pods.go:59] 8 kube-system pods found
	I1123 08:43:57.312519  254114 system_pods.go:61] "coredns-5dd5756b68-2fdsv" [1c71e052-b3c2-4875-8aeb-7d724ee26e06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:57.312525  254114 system_pods.go:61] "etcd-old-k8s-version-204346" [58cc20a4-23f1-4a5a-ba0a-03fadfc6df09] Running
	I1123 08:43:57.312530  254114 system_pods.go:61] "kindnet-q8xnm" [c3178adf-8eb3-4210-9674-fdda89d3317d] Running
	I1123 08:43:57.312539  254114 system_pods.go:61] "kube-apiserver-old-k8s-version-204346" [e63e828c-37a0-48ab-9413-932b3cde09cc] Running
	I1123 08:43:57.312542  254114 system_pods.go:61] "kube-controller-manager-old-k8s-version-204346" [bbaefdad-f8f3-4264-a467-5f75937de2a0] Running
	I1123 08:43:57.312546  254114 system_pods.go:61] "kube-proxy-tzq9b" [5d122719-2577-438f-bae7-72a1034f88ef] Running
	I1123 08:43:57.312548  254114 system_pods.go:61] "kube-scheduler-old-k8s-version-204346" [773bcc91-2553-4606-91ab-f32ec0ba3738] Running
	I1123 08:43:57.312553  254114 system_pods.go:61] "storage-provisioner" [372382d8-d23f-4e6d-89ae-8f2c9c46b6dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:43:57.312559  254114 system_pods.go:74] duration metric: took 4.418082ms to wait for pod list to return data ...
	I1123 08:43:57.312566  254114 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:43:57.315607  254114 default_sa.go:45] found service account: "default"
	I1123 08:43:57.315634  254114 default_sa.go:55] duration metric: took 3.061615ms for default service account to be created ...
	I1123 08:43:57.315674  254114 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:43:57.320602  254114 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:57.320629  254114 system_pods.go:89] "coredns-5dd5756b68-2fdsv" [1c71e052-b3c2-4875-8aeb-7d724ee26e06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:57.320634  254114 system_pods.go:89] "etcd-old-k8s-version-204346" [58cc20a4-23f1-4a5a-ba0a-03fadfc6df09] Running
	I1123 08:43:57.320639  254114 system_pods.go:89] "kindnet-q8xnm" [c3178adf-8eb3-4210-9674-fdda89d3317d] Running
	I1123 08:43:57.320657  254114 system_pods.go:89] "kube-apiserver-old-k8s-version-204346" [e63e828c-37a0-48ab-9413-932b3cde09cc] Running
	I1123 08:43:57.320663  254114 system_pods.go:89] "kube-controller-manager-old-k8s-version-204346" [bbaefdad-f8f3-4264-a467-5f75937de2a0] Running
	I1123 08:43:57.320668  254114 system_pods.go:89] "kube-proxy-tzq9b" [5d122719-2577-438f-bae7-72a1034f88ef] Running
	I1123 08:43:57.320673  254114 system_pods.go:89] "kube-scheduler-old-k8s-version-204346" [773bcc91-2553-4606-91ab-f32ec0ba3738] Running
	I1123 08:43:57.320679  254114 system_pods.go:89] "storage-provisioner" [372382d8-d23f-4e6d-89ae-8f2c9c46b6dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:43:57.320708  254114 retry.go:31] will retry after 281.398987ms: missing components: kube-dns
	I1123 08:43:57.607881  254114 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:57.607919  254114 system_pods.go:89] "coredns-5dd5756b68-2fdsv" [1c71e052-b3c2-4875-8aeb-7d724ee26e06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:57.607927  254114 system_pods.go:89] "etcd-old-k8s-version-204346" [58cc20a4-23f1-4a5a-ba0a-03fadfc6df09] Running
	I1123 08:43:57.607936  254114 system_pods.go:89] "kindnet-q8xnm" [c3178adf-8eb3-4210-9674-fdda89d3317d] Running
	I1123 08:43:57.607942  254114 system_pods.go:89] "kube-apiserver-old-k8s-version-204346" [e63e828c-37a0-48ab-9413-932b3cde09cc] Running
	I1123 08:43:57.607948  254114 system_pods.go:89] "kube-controller-manager-old-k8s-version-204346" [bbaefdad-f8f3-4264-a467-5f75937de2a0] Running
	I1123 08:43:57.607952  254114 system_pods.go:89] "kube-proxy-tzq9b" [5d122719-2577-438f-bae7-72a1034f88ef] Running
	I1123 08:43:57.607957  254114 system_pods.go:89] "kube-scheduler-old-k8s-version-204346" [773bcc91-2553-4606-91ab-f32ec0ba3738] Running
	I1123 08:43:57.607964  254114 system_pods.go:89] "storage-provisioner" [372382d8-d23f-4e6d-89ae-8f2c9c46b6dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:43:57.607991  254114 retry.go:31] will retry after 389.750642ms: missing components: kube-dns
	I1123 08:43:58.002207  254114 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:58.002234  254114 system_pods.go:89] "coredns-5dd5756b68-2fdsv" [1c71e052-b3c2-4875-8aeb-7d724ee26e06] Running
	I1123 08:43:58.002240  254114 system_pods.go:89] "etcd-old-k8s-version-204346" [58cc20a4-23f1-4a5a-ba0a-03fadfc6df09] Running
	I1123 08:43:58.002249  254114 system_pods.go:89] "kindnet-q8xnm" [c3178adf-8eb3-4210-9674-fdda89d3317d] Running
	I1123 08:43:58.002253  254114 system_pods.go:89] "kube-apiserver-old-k8s-version-204346" [e63e828c-37a0-48ab-9413-932b3cde09cc] Running
	I1123 08:43:58.002257  254114 system_pods.go:89] "kube-controller-manager-old-k8s-version-204346" [bbaefdad-f8f3-4264-a467-5f75937de2a0] Running
	I1123 08:43:58.002261  254114 system_pods.go:89] "kube-proxy-tzq9b" [5d122719-2577-438f-bae7-72a1034f88ef] Running
	I1123 08:43:58.002264  254114 system_pods.go:89] "kube-scheduler-old-k8s-version-204346" [773bcc91-2553-4606-91ab-f32ec0ba3738] Running
	I1123 08:43:58.002267  254114 system_pods.go:89] "storage-provisioner" [372382d8-d23f-4e6d-89ae-8f2c9c46b6dc] Running
	I1123 08:43:58.002275  254114 system_pods.go:126] duration metric: took 686.59398ms to wait for k8s-apps to be running ...
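The block above is a poll loop: system_pods lists the kube-system pods, and retry.go sleeps a small randomized interval (281ms, then 389ms here) before re-listing, until no component is missing. A minimal Go sketch of that poll-with-backoff pattern, with a caller-supplied check function standing in for the pod listing (illustrative only, not minikube's actual retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor polls check() until it returns nil or the deadline passes,
    // sleeping a jittered, growing interval between attempts.
    func waitFor(check func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	backoff := 250 * time.Millisecond
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		// Jitter the sleep so concurrent waiters do not poll in lockstep.
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		backoff += backoff / 2 // grow ~1.5x per attempt
    	}
    }

    func main() {
    	attempts := 0
    	err := waitFor(func() error {
    		attempts++
    		if attempts < 3 {
    			return errors.New("missing components: kube-dns")
    		}
    		return nil
    	}, 10*time.Second)
    	fmt.Println("done:", err)
    }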
	I1123 08:43:58.002285  254114 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:43:58.002331  254114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:43:58.016798  254114 system_svc.go:56] duration metric: took 14.504815ms WaitForService to wait for kubelet
	I1123 08:43:58.016829  254114 kubeadm.go:587] duration metric: took 15.688298138s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:43:58.016854  254114 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:43:58.021952  254114 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:43:58.021983  254114 node_conditions.go:123] node cpu capacity is 8
	I1123 08:43:58.022010  254114 node_conditions.go:105] duration metric: took 5.146561ms to run NodePressure ...
	I1123 08:43:58.022026  254114 start.go:242] waiting for startup goroutines ...
	I1123 08:43:58.022040  254114 start.go:247] waiting for cluster config update ...
	I1123 08:43:58.022056  254114 start.go:256] writing updated cluster config ...
	I1123 08:43:58.022354  254114 ssh_runner.go:195] Run: rm -f paused
	I1123 08:43:58.026482  254114 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:43:58.030783  254114 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-2fdsv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.035326  254114 pod_ready.go:94] pod "coredns-5dd5756b68-2fdsv" is "Ready"
	I1123 08:43:58.035351  254114 pod_ready.go:86] duration metric: took 4.542747ms for pod "coredns-5dd5756b68-2fdsv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.038155  254114 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.042389  254114 pod_ready.go:94] pod "etcd-old-k8s-version-204346" is "Ready"
	I1123 08:43:58.042413  254114 pod_ready.go:86] duration metric: took 4.236026ms for pod "etcd-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.045530  254114 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.049686  254114 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-204346" is "Ready"
	I1123 08:43:58.049708  254114 pod_ready.go:86] duration metric: took 4.151976ms for pod "kube-apiserver-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.052167  254114 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.430619  254114 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-204346" is "Ready"
	I1123 08:43:58.430662  254114 pod_ready.go:86] duration metric: took 378.478321ms for pod "kube-controller-manager-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.631434  254114 pod_ready.go:83] waiting for pod "kube-proxy-tzq9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:59.030458  254114 pod_ready.go:94] pod "kube-proxy-tzq9b" is "Ready"
	I1123 08:43:59.030484  254114 pod_ready.go:86] duration metric: took 399.024693ms for pod "kube-proxy-tzq9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:59.231371  254114 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:59.630789  254114 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-204346" is "Ready"
	I1123 08:43:59.630824  254114 pod_ready.go:86] duration metric: took 399.424476ms for pod "kube-scheduler-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:59.630840  254114 pod_ready.go:40] duration metric: took 1.604329749s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
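Each pod_ready step above resolves as soon as the pod reports condition Ready=True. A sketch of that per-pod check with client-go, assuming a kubeconfig at the default location (minikube's pod_ready.go wraps more retry and timeout logic around this):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named pod has condition Ready=True.
    func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ready, err := podIsReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-2fdsv")
    	fmt.Println(ready, err)
    }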
	I1123 08:43:59.682106  254114 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 08:43:59.683780  254114 out.go:203] 
	W1123 08:43:59.685129  254114 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:43:59.686407  254114 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:43:59.689781  254114 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-204346" cluster and "default" namespace by default
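Here the old-k8s-version profile (PID 254114) finishes; the lines that follow interleave output from other profiles running in parallel (PIDs 206485 and 258086). The skew warning above compares minor versions only: 1.34.2 vs 1.28.0 gives |34 - 28| = 6. A tiny sketch of that comparison, assuming plain "major.minor.patch" version strings (not minikube's actual version-handling code):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor
    // components of two "major.minor.patch" version strings.
    func minorSkew(a, b string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("bad version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	ma, err := minor(a)
    	if err != nil {
    		return 0, err
    	}
    	mb, err := minor(b)
    	if err != nil {
    		return 0, err
    	}
    	if ma < mb {
    		ma, mb = mb, ma
    	}
    	return ma - mb, nil
    }

    func main() {
    	skew, _ := minorSkew("1.34.2", "1.28.0")
    	fmt.Println("minor skew:", skew) // 6, matching the log line above
    }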
	I1123 08:43:59.237742  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:59.238210  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
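The healthz probe above is a plain HTTPS GET against the apiserver; "connection refused" means nothing is listening on 8443 yet, which is what triggers the container-by-container log sweep below. A hedged sketch of such a probe (certificate verification is disabled here purely for illustration; a faithful client would trust the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz GETs the apiserver's /healthz endpoint; a 200 response
    // means the apiserver is up and serving.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption for the sketch: skip TLS verification. A real
    		// client would load the cluster CA certificate.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "connect: connection refused" as in the log
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(checkHealthz("https://192.168.76.2:8443/healthz"))
    }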
	I1123 08:43:59.238271  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:59.238328  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:59.266168  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:59.266191  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:59.266197  206485 cri.go:89] found id: ""
	I1123 08:43:59.266205  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:59.266261  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.270518  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.274380  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:59.274439  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:59.301514  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:59.301542  206485 cri.go:89] found id: ""
	I1123 08:43:59.301552  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:59.301612  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.305940  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:59.306010  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:59.332361  206485 cri.go:89] found id: ""
	I1123 08:43:59.332384  206485 logs.go:282] 0 containers: []
	W1123 08:43:59.332394  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:59.332402  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:59.332453  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:59.360415  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:59.360515  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:59.360533  206485 cri.go:89] found id: ""
	I1123 08:43:59.360541  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:59.360600  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.364967  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.369350  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:59.369411  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:59.400932  206485 cri.go:89] found id: ""
	I1123 08:43:59.400960  206485 logs.go:282] 0 containers: []
	W1123 08:43:59.400971  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:59.400979  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:59.401039  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:59.426988  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:59.427009  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:59.427013  206485 cri.go:89] found id: ""
	I1123 08:43:59.427019  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:59.427065  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.431308  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.435139  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:59.435187  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:59.461062  206485 cri.go:89] found id: ""
	I1123 08:43:59.461089  206485 logs.go:282] 0 containers: []
	W1123 08:43:59.461098  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:59.461106  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:59.461156  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:59.487437  206485 cri.go:89] found id: ""
	I1123 08:43:59.487458  206485 logs.go:282] 0 containers: []
	W1123 08:43:59.487467  206485 logs.go:284] No container was found matching "storage-provisioner"
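Each cri.go pass above shells out to `sudo crictl ps -a --quiet --name=<component>`; with --quiet, crictl prints one container ID per line, and an empty result produces the "No container was found" warnings. A sketch of that discovery step, run locally rather than over SSH as in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and
    // returns one container ID per non-empty output line.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if line = strings.TrimSpace(line); line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listContainerIDs(c)
    		fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
    	}
    }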
	I1123 08:43:59.487476  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:59.487487  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:59.520087  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:59.520115  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:59.551620  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:59.551662  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:59.610836  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:59.610857  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:59.610875  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:59.647413  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:59.647458  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:59.686992  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:59.687024  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:59.724084  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:59.724115  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:59.760830  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:59.760916  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:59.811485  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:59.811519  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:59.920592  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:59.920624  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:59.937635  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:59.937681  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:59.974909  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:59.974948  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:57.830451  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:58.330875  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:58.830628  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:59.330282  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:59.830162  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:00.330422  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:00.830950  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:01.330805  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:01.830841  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:02.330880  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:02.414724  258086 kubeadm.go:1114] duration metric: took 5.161257652s to wait for elevateKubeSystemPrivileges
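The burst of `kubectl get sa default` invocations above (one every ~500ms) is the elevateKubeSystemPrivileges wait: the default ServiceAccount appearing is the signal that the cluster is ready to accept workloads. A sketch of that fixed-interval poll, assuming kubectl on PATH (the log runs the pinned binary over SSH instead):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` every 500ms until the
    // command succeeds (the ServiceAccount exists) or the timeout elapses.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default",
    			"--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default ServiceAccount not found after %v", timeout)
    }

    func main() {
    	fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
    }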
	I1123 08:44:02.414756  258086 kubeadm.go:403] duration metric: took 15.737896165s to StartCluster
	I1123 08:44:02.414776  258086 settings.go:142] acquiring lock: {Name:mk2c00a8b461754a49d5c7fd5af34c7d1005153a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:02.414842  258086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:44:02.416821  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/kubeconfig: {Name:mk636046b7146fd65b5638a6d549b76e61f7f055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:02.417741  258086 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:44:02.417762  258086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:44:02.417786  258086 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:02.417889  258086 addons.go:70] Setting storage-provisioner=true in profile "no-preload-999106"
	I1123 08:44:02.417910  258086 addons.go:239] Setting addon storage-provisioner=true in "no-preload-999106"
	I1123 08:44:02.417926  258086 addons.go:70] Setting default-storageclass=true in profile "no-preload-999106"
	I1123 08:44:02.417947  258086 config.go:182] Loaded profile config "no-preload-999106": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:44:02.417950  258086 host.go:66] Checking if "no-preload-999106" exists ...
	I1123 08:44:02.417952  258086 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-999106"
	I1123 08:44:02.418452  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:44:02.418590  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:44:02.419817  258086 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:02.422556  258086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:02.448285  258086 addons.go:239] Setting addon default-storageclass=true in "no-preload-999106"
	I1123 08:44:02.448336  258086 host.go:66] Checking if "no-preload-999106" exists ...
	I1123 08:44:02.448496  258086 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:02.448879  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:44:02.449866  258086 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:02.449888  258086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:02.449940  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:44:02.479849  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:44:02.481186  258086 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:02.481210  258086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:02.481267  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:44:02.506758  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
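The `docker container inspect -f` template above digs the published host port for the guest's 22/tcp out of .NetworkSettings.Ports; sshutil then dials 127.0.0.1 on that port (33063 here). A sketch of the same extraction via the docker CLI, using the template exactly as it appears in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort returns the host port Docker mapped to container port
    // 22/tcp, using the same Go template as the cli_runner call above.
    func sshHostPort(container string) (string, error) {
    	const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("no-preload-999106")
    	fmt.Printf("ssh -> 127.0.0.1:%s (err=%v)\n", port, err) // 33063 in the log
    }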
	I1123 08:44:02.518200  258086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:44:02.581982  258086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:02.612639  258086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:02.629441  258086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:02.722551  258086 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
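The sed pipeline at 08:44:02.518200 rewrites the CoreDNS Corefile in place: it splices a hosts block in front of the `forward . /etc/resolv.conf` line and a `log` directive in front of `errors`, then feeds the result to `kubectl replace -f -`. Reconstructed from that sed expression, the injected stanza is:

        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }

so that host.minikube.internal resolves to the host-side gateway address (192.168.85.1) from inside the cluster, while every other name falls through to the regular forwarder.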
	I1123 08:44:02.724186  258086 node_ready.go:35] waiting up to 6m0s for node "no-preload-999106" to be "Ready" ...
	I1123 08:44:02.952603  258086 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:44:02.531044  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:44:02.531451  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:44:02.531515  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:44:02.531572  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:44:02.568683  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:02.568716  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:02.568723  206485 cri.go:89] found id: ""
	I1123 08:44:02.568732  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:44:02.568799  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.573171  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.577424  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:44:02.577582  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:44:02.618894  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:02.618923  206485 cri.go:89] found id: ""
	I1123 08:44:02.618932  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:44:02.618987  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.624397  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:44:02.624456  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:44:02.659100  206485 cri.go:89] found id: ""
	I1123 08:44:02.659131  206485 logs.go:282] 0 containers: []
	W1123 08:44:02.659143  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:44:02.659151  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:44:02.659213  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:44:02.694829  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:02.694848  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:02.694852  206485 cri.go:89] found id: ""
	I1123 08:44:02.694859  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:44:02.694907  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.700604  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.705763  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:44:02.705843  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:44:02.741480  206485 cri.go:89] found id: ""
	I1123 08:44:02.741510  206485 logs.go:282] 0 containers: []
	W1123 08:44:02.741523  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:44:02.741529  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:44:02.741595  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:44:02.778417  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:02.778442  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:02.778448  206485 cri.go:89] found id: ""
	I1123 08:44:02.778456  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:44:02.778518  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.784422  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.789717  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:44:02.789794  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:44:02.821165  206485 cri.go:89] found id: ""
	I1123 08:44:02.821194  206485 logs.go:282] 0 containers: []
	W1123 08:44:02.821205  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:44:02.821216  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:44:02.821271  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:44:02.852719  206485 cri.go:89] found id: ""
	I1123 08:44:02.852745  206485 logs.go:282] 0 containers: []
	W1123 08:44:02.852754  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:44:02.852766  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:44:02.852785  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:02.892590  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:44:02.892629  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:02.926138  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:44:02.926174  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:02.962943  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:44:02.962982  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:44:02.999133  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:44:02.999165  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:44:03.103866  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:44:03.103901  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:44:03.118230  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:44:03.118258  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:03.152826  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:44:03.152853  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:03.207774  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:44:03.207809  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:44:03.255093  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:44:03.255135  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:44:03.316127  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:44:03.316156  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:44:03.316171  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:03.350816  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:44:03.350855  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:05.885724  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:44:05.886146  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:44:05.886208  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:44:05.886271  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:44:05.912631  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:05.912667  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:05.912672  206485 cri.go:89] found id: ""
	I1123 08:44:05.912681  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:44:05.912736  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:05.916915  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:05.920714  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:44:05.920785  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:44:05.948197  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:05.948226  206485 cri.go:89] found id: ""
	I1123 08:44:05.948237  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:44:05.948297  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:05.952344  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:44:05.952394  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:44:05.979281  206485 cri.go:89] found id: ""
	I1123 08:44:05.979302  206485 logs.go:282] 0 containers: []
	W1123 08:44:05.979309  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:44:05.979315  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:44:05.979360  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:44:06.005748  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:06.005775  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:06.005781  206485 cri.go:89] found id: ""
	I1123 08:44:06.005790  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:44:06.005842  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:06.009813  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:06.013567  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:44:06.013631  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:44:06.040041  206485 cri.go:89] found id: ""
	I1123 08:44:06.040069  206485 logs.go:282] 0 containers: []
	W1123 08:44:06.040082  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:44:06.040090  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:44:06.040146  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:44:06.068400  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:06.068423  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:06.068428  206485 cri.go:89] found id: ""
	I1123 08:44:06.068435  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:44:06.068489  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:06.072472  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:06.076295  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:44:06.076354  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:44:06.102497  206485 cri.go:89] found id: ""
	I1123 08:44:06.102525  206485 logs.go:282] 0 containers: []
	W1123 08:44:06.102538  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:44:06.102546  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:44:06.102607  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:44:06.130104  206485 cri.go:89] found id: ""
	I1123 08:44:06.130125  206485 logs.go:282] 0 containers: []
	W1123 08:44:06.130132  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:44:06.130141  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:44:06.130150  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:44:06.219429  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:44:06.219465  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:44:06.278463  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:44:06.278491  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:44:06.278507  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:06.315308  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:44:06.315344  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:06.374595  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:44:06.374627  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:06.404338  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:44:06.404365  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:44:06.453101  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:44:06.453130  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:44:06.466457  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:44:06.466503  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:06.499235  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:44:06.499264  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:06.531782  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:44:06.531811  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:06.567190  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:44:06.567225  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:06.595596  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:44:06.595626  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:44:02.953927  258086 addons.go:530] duration metric: took 536.142427ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:44:03.227564  258086 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-999106" context rescaled to 1 replicas
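The rescale logged by kapi.go is roughly equivalent to running `kubectl -n kube-system scale deployment coredns --replicas=1`, presumably so a single-node cluster does not run duplicate CoreDNS pods.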
	W1123 08:44:04.727505  258086 node_ready.go:57] node "no-preload-999106" has "Ready":"False" status (will retry)
	W1123 08:44:07.227319  258086 node_ready.go:57] node "no-preload-999106" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	1357388ae0aa5       56cc512116c8f       8 seconds ago       Running             busybox                   0                   34632f38cdf63       busybox                                          default
	80475d9bc2771       ead0a4a53df89       13 seconds ago      Running             coredns                   0                   cd75a3dc79d90       coredns-5dd5756b68-2fdsv                         kube-system
	089b66b211cc0       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   8489f4374b9ca       storage-provisioner                              kube-system
	39b3d72b0119b       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   4e7fe0b0a93a6       kindnet-q8xnm                                    kube-system
	ef4e4389e44ca       ea1030da44aa1       27 seconds ago      Running             kube-proxy                0                   5b9d69d308423       kube-proxy-tzq9b                                 kube-system
	0ef7f303a2ce3       f6f496300a2ae       46 seconds ago      Running             kube-scheduler            0                   2757f6f1f2847       kube-scheduler-old-k8s-version-204346            kube-system
	8f2985624466e       4be79c38a4bab       46 seconds ago      Running             kube-controller-manager   0                   7d13da4692cf0       kube-controller-manager-old-k8s-version-204346   kube-system
	328d012e2a9c6       bb5e0dde9054c       46 seconds ago      Running             kube-apiserver            0                   801b406a053e0       kube-apiserver-old-k8s-version-204346            kube-system
	09bd2ad51bcbe       73deb9a3f7025       46 seconds ago      Running             etcd                      0                   bd3a3ff71b569       etcd-old-k8s-version-204346                      kube-system
	
	
	==> containerd <==
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.554367695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2fdsv,Uid:1c71e052-b3c2-4875-8aeb-7d724ee26e06,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd75a3dc79d9055a439d60e0b8c3a0eaf0c09774664074c042478ddbd42d8ed7\""
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.557881691Z" level=info msg="CreateContainer within sandbox \"cd75a3dc79d9055a439d60e0b8c3a0eaf0c09774664074c042478ddbd42d8ed7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.565420837Z" level=info msg="Container 80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.572367270Z" level=info msg="CreateContainer within sandbox \"cd75a3dc79d9055a439d60e0b8c3a0eaf0c09774664074c042478ddbd42d8ed7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a\""
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.573105266Z" level=info msg="StartContainer for \"80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a\""
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.573985605Z" level=info msg="connecting to shim 80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a" address="unix:///run/containerd/s/402875f21b0b7b033dcd7b3cca8f2720835d3f90418b17dd5f3df52485b09e0c" protocol=ttrpc version=3
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.602588352Z" level=info msg="StartContainer for \"089b66b211cc086767c9fdf40aba06bcf7b4484c0976381a4bdf51afe2621f61\" returns successfully"
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.630751490Z" level=info msg="StartContainer for \"80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a\" returns successfully"
	Nov 23 08:44:00 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:00.171495043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:85a1fcd5-ee10-4749-9dec-40efed82eb3e,Namespace:default,Attempt:0,}"
	Nov 23 08:44:00 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:00.210794452Z" level=info msg="connecting to shim 34632f38cdf63a655e8bb7d39dd15ba97b0a7a53c3d2190fc06701fde9c49996" address="unix:///run/containerd/s/9131634b5b9e099a09d55b33b67bba908aad637f11b87abf7ed2211b15f763a9" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:44:00 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:00.287286149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:85a1fcd5-ee10-4749-9dec-40efed82eb3e,Namespace:default,Attempt:0,} returns sandbox id \"34632f38cdf63a655e8bb7d39dd15ba97b0a7a53c3d2190fc06701fde9c49996\""
	Nov 23 08:44:00 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:00.289225870Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.394106458Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.394929355Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.396449964Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.399611876Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.400256412Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.110984688s"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.400309785Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.402701592Z" level=info msg="CreateContainer within sandbox \"34632f38cdf63a655e8bb7d39dd15ba97b0a7a53c3d2190fc06701fde9c49996\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.410744826Z" level=info msg="Container 1357388ae0aa594dabe5692b9f6c39afa871a26d6dd0b5809e1510839a986dd5: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.419870192Z" level=info msg="CreateContainer within sandbox \"34632f38cdf63a655e8bb7d39dd15ba97b0a7a53c3d2190fc06701fde9c49996\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"1357388ae0aa594dabe5692b9f6c39afa871a26d6dd0b5809e1510839a986dd5\""
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.421053047Z" level=info msg="StartContainer for \"1357388ae0aa594dabe5692b9f6c39afa871a26d6dd0b5809e1510839a986dd5\""
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.422071051Z" level=info msg="connecting to shim 1357388ae0aa594dabe5692b9f6c39afa871a26d6dd0b5809e1510839a986dd5" address="unix:///run/containerd/s/9131634b5b9e099a09d55b33b67bba908aad637f11b87abf7ed2211b15f763a9" protocol=ttrpc version=3
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.495260690Z" level=info msg="StartContainer for \"1357388ae0aa594dabe5692b9f6c39afa871a26d6dd0b5809e1510839a986dd5\" returns successfully"
	Nov 23 08:44:09 old-k8s-version-204346 containerd[661]: E1123 08:44:09.948064     661 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38064 - 25011 "HINFO IN 3150570816276822377.3169321318277058455. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024835318s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-204346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-204346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=old-k8s-version-204346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_43_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:43:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-204346
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:44:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:00 +0000   Sun, 23 Nov 2025 08:43:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:00 +0000   Sun, 23 Nov 2025 08:43:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:00 +0000   Sun, 23 Nov 2025 08:43:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:44:00 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-204346
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ddf0e41b-1230-4041-b2b0-aca7ba0a6fe4
	  Boot ID:                    3bab2277-1db4-4284-9fcc-5d1d58e87eb4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-2fdsv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-204346                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-q8xnm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-204346             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-204346    200m (2%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-tzq9b                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-204346             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
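The allocated totals are column sums from the pod table above: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 850/8000 ≈ 10.6% of the 8-CPU capacity, truncated to 10%; the only CPU limit is kindnet's 100m (1%). Memory requests are 70Mi + 100Mi + 50Mi = 220Mi and limits 170Mi + 50Mi = 220Mi, each well under 1% of the 32863352Ki capacity, hence the (0%) figures.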
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node old-k8s-version-204346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node old-k8s-version-204346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x7 over 47s)  kubelet          Node old-k8s-version-204346 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-204346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-204346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-204346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-204346 event: Registered Node old-k8s-version-204346 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-204346 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395963] i8042: Warning: Keylock active
	[  +0.012075] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497035] block sda: the capability attribute has been deprecated.
	[  +0.088048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022581] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.308229] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [09bd2ad51bcbe3133715a0348c39fbd488688f92fdc757fef7b242366c6eb72b] <==
	{"level":"info","ts":"2025-11-23T08:43:25.072307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-11-23T08:43:25.072449Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:43:25.073769Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:43:25.074175Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:43:25.073803Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-23T08:43:25.074517Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-23T08:43:25.074362Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:43:25.459144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T08:43:25.459188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T08:43:25.459233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-11-23T08:43:25.459253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:25.459261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:25.459281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:25.459298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:25.460336Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-204346 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:43:25.460368Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:43:25.460352Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:25.460547Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:43:25.46207Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T08:43:25.460343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:43:25.46151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:25.462309Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:25.462347Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:25.461945Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:43:25.466791Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 08:44:11 up  1:26,  0 user,  load average: 2.68, 2.53, 1.78
	Linux old-k8s-version-204346 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [39b3d72b0119bcc6ecd6e57b170ea19f5592bba7f48f0436c996349c8ca348dd] <==
	I1123 08:43:46.866967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:43:46.867287       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:43:46.867434       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:43:46.867454       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:43:46.867482       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:43:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:43:47.067711       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:43:47.067748       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:43:47.067760       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:43:47.067904       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:43:47.369355       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:43:47.369384       1 metrics.go:72] Registering metrics
	I1123 08:43:47.369441       1 controller.go:711] "Syncing nftables rules"
	I1123 08:43:57.076844       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:43:57.076915       1 main.go:301] handling current node
	I1123 08:44:07.068039       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:44:07.068093       1 main.go:301] handling current node
	
	
	==> kube-apiserver [328d012e2a9c60b89bce2737c3bcb6c1f31581c21f2a3f2969cf002ad66bc982] <==
	I1123 08:43:26.887380       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:43:26.887389       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:43:26.887641       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 08:43:26.887685       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 08:43:26.887980       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:43:26.888304       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1123 08:43:26.889201       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 08:43:26.889373       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:43:26.893730       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:43:27.092344       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:43:27.794220       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:43:27.798285       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:43:27.798301       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:43:28.278123       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:43:28.347605       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:43:28.396516       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:43:28.402119       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 08:43:28.403251       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:43:28.410689       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:43:28.846011       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:43:29.796332       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:43:29.808173       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:43:29.820075       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 08:43:42.454084       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:43:42.555727       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8f2985624466e7aea2ab0922f065c597c0bfd5950e9a7d9af9278d532ea162aa] <==
	I1123 08:43:42.301940       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:43:42.313117       1 shared_informer.go:318] Caches are synced for endpoint
	I1123 08:43:42.320707       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:43:42.468731       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tzq9b"
	I1123 08:43:42.470032       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-q8xnm"
	I1123 08:43:42.562465       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 08:43:42.637391       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:43:42.693556       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:43:42.693596       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:43:42.710317       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-j49bt"
	I1123 08:43:42.720116       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2fdsv"
	I1123 08:43:42.729591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="167.450584ms"
	I1123 08:43:42.750029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.070236ms"
	I1123 08:43:42.772635       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.530968ms"
	I1123 08:43:42.772808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.9µs"
	I1123 08:43:42.817260       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 08:43:42.828181       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-j49bt"
	I1123 08:43:42.834660       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.534321ms"
	I1123 08:43:42.847353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.631926ms"
	I1123 08:43:42.847627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="198.148µs"
	I1123 08:43:57.121773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="186.5µs"
	I1123 08:43:57.150540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.97µs"
	I1123 08:43:57.197693       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1123 08:43:57.981361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.07769ms"
	I1123 08:43:57.981507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.031µs"
	
	
	==> kube-proxy [ef4e4389e44ca59002bc45aac4774894eff14408a6f6654c403f41a7f5ae9178] <==
	I1123 08:43:43.138692       1 server_others.go:69] "Using iptables proxy"
	I1123 08:43:43.148849       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1123 08:43:43.173806       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:43:43.177107       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:43:43.177190       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:43:43.177209       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:43:43.177247       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:43:43.177554       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:43:43.177673       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:43:43.178478       1 config.go:188] "Starting service config controller"
	I1123 08:43:43.178510       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:43:43.179694       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:43:43.179818       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:43:43.180065       1 config.go:315] "Starting node config controller"
	I1123 08:43:43.180084       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:43:43.280364       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:43:43.280485       1 shared_informer.go:318] Caches are synced for node config
	I1123 08:43:43.280575       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0ef7f303a2ce364a193b1c3a534acf3ce3197306c4c2cc9dd0d5717ae9adf953] <==
	W1123 08:43:26.854417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 08:43:26.854437       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 08:43:26.854443       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 08:43:26.854473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 08:43:26.854661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 08:43:26.854686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1123 08:43:26.854994       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 08:43:26.855027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 08:43:27.681328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 08:43:27.681369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 08:43:27.807379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 08:43:27.807413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 08:43:27.818838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 08:43:27.818882       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 08:43:27.819991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:43:27.820027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 08:43:27.871687       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 08:43:27.871733       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:43:27.919852       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:43:27.919895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:43:28.036804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1123 08:43:28.036839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 08:43:28.055978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 08:43:28.056016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1123 08:43:29.649311       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.141354    1529 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.142046    1529 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.476770    1529 topology_manager.go:215] "Topology Admit Handler" podUID="5d122719-2577-438f-bae7-72a1034f88ef" podNamespace="kube-system" podName="kube-proxy-tzq9b"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.478900    1529 topology_manager.go:215] "Topology Admit Handler" podUID="c3178adf-8eb3-4210-9674-fdda89d3317d" podNamespace="kube-system" podName="kindnet-q8xnm"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651490    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksdwp\" (UniqueName: \"kubernetes.io/projected/5d122719-2577-438f-bae7-72a1034f88ef-kube-api-access-ksdwp\") pod \"kube-proxy-tzq9b\" (UID: \"5d122719-2577-438f-bae7-72a1034f88ef\") " pod="kube-system/kube-proxy-tzq9b"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651698    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3178adf-8eb3-4210-9674-fdda89d3317d-lib-modules\") pod \"kindnet-q8xnm\" (UID: \"c3178adf-8eb3-4210-9674-fdda89d3317d\") " pod="kube-system/kindnet-q8xnm"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651862    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d122719-2577-438f-bae7-72a1034f88ef-lib-modules\") pod \"kube-proxy-tzq9b\" (UID: \"5d122719-2577-438f-bae7-72a1034f88ef\") " pod="kube-system/kube-proxy-tzq9b"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651898    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c3178adf-8eb3-4210-9674-fdda89d3317d-cni-cfg\") pod \"kindnet-q8xnm\" (UID: \"c3178adf-8eb3-4210-9674-fdda89d3317d\") " pod="kube-system/kindnet-q8xnm"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651928    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3178adf-8eb3-4210-9674-fdda89d3317d-xtables-lock\") pod \"kindnet-q8xnm\" (UID: \"c3178adf-8eb3-4210-9674-fdda89d3317d\") " pod="kube-system/kindnet-q8xnm"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651960    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9ntt\" (UniqueName: \"kubernetes.io/projected/c3178adf-8eb3-4210-9674-fdda89d3317d-kube-api-access-m9ntt\") pod \"kindnet-q8xnm\" (UID: \"c3178adf-8eb3-4210-9674-fdda89d3317d\") " pod="kube-system/kindnet-q8xnm"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651992    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d122719-2577-438f-bae7-72a1034f88ef-kube-proxy\") pod \"kube-proxy-tzq9b\" (UID: \"5d122719-2577-438f-bae7-72a1034f88ef\") " pod="kube-system/kube-proxy-tzq9b"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.652021    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d122719-2577-438f-bae7-72a1034f88ef-xtables-lock\") pod \"kube-proxy-tzq9b\" (UID: \"5d122719-2577-438f-bae7-72a1034f88ef\") " pod="kube-system/kube-proxy-tzq9b"
	Nov 23 08:43:46 old-k8s-version-204346 kubelet[1529]: I1123 08:43:46.940830    1529 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tzq9b" podStartSLOduration=4.940768474 podCreationTimestamp="2025-11-23 08:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:43.932316562 +0000 UTC m=+14.168739010" watchObservedRunningTime="2025-11-23 08:43:46.940768474 +0000 UTC m=+17.177190922"
	Nov 23 08:43:46 old-k8s-version-204346 kubelet[1529]: I1123 08:43:46.940988    1529 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-q8xnm" podStartSLOduration=1.718157541 podCreationTimestamp="2025-11-23 08:43:42 +0000 UTC" firstStartedPulling="2025-11-23 08:43:43.30687244 +0000 UTC m=+13.543294877" lastFinishedPulling="2025-11-23 08:43:46.52967151 +0000 UTC m=+16.766093948" observedRunningTime="2025-11-23 08:43:46.940594815 +0000 UTC m=+17.177017264" watchObservedRunningTime="2025-11-23 08:43:46.940956612 +0000 UTC m=+17.177379059"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.093693    1529 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.122486    1529 topology_manager.go:215] "Topology Admit Handler" podUID="1c71e052-b3c2-4875-8aeb-7d724ee26e06" podNamespace="kube-system" podName="coredns-5dd5756b68-2fdsv"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.122759    1529 topology_manager.go:215] "Topology Admit Handler" podUID="372382d8-d23f-4e6d-89ae-8f2c9c46b6dc" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.263400    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c71e052-b3c2-4875-8aeb-7d724ee26e06-config-volume\") pod \"coredns-5dd5756b68-2fdsv\" (UID: \"1c71e052-b3c2-4875-8aeb-7d724ee26e06\") " pod="kube-system/coredns-5dd5756b68-2fdsv"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.263464    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474bl\" (UniqueName: \"kubernetes.io/projected/1c71e052-b3c2-4875-8aeb-7d724ee26e06-kube-api-access-474bl\") pod \"coredns-5dd5756b68-2fdsv\" (UID: \"1c71e052-b3c2-4875-8aeb-7d724ee26e06\") " pod="kube-system/coredns-5dd5756b68-2fdsv"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.263575    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/372382d8-d23f-4e6d-89ae-8f2c9c46b6dc-tmp\") pod \"storage-provisioner\" (UID: \"372382d8-d23f-4e6d-89ae-8f2c9c46b6dc\") " pod="kube-system/storage-provisioner"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.263625    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cbg7\" (UniqueName: \"kubernetes.io/projected/372382d8-d23f-4e6d-89ae-8f2c9c46b6dc-kube-api-access-2cbg7\") pod \"storage-provisioner\" (UID: \"372382d8-d23f-4e6d-89ae-8f2c9c46b6dc\") " pod="kube-system/storage-provisioner"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.963727    1529 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.963673229 podCreationTimestamp="2025-11-23 08:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.963551537 +0000 UTC m=+28.199973987" watchObservedRunningTime="2025-11-23 08:43:57.963673229 +0000 UTC m=+28.200095677"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.974383    1529 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2fdsv" podStartSLOduration=15.974330092 podCreationTimestamp="2025-11-23 08:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.974110377 +0000 UTC m=+28.210532825" watchObservedRunningTime="2025-11-23 08:43:57.974330092 +0000 UTC m=+28.210752539"
	Nov 23 08:43:59 old-k8s-version-204346 kubelet[1529]: I1123 08:43:59.862724    1529 topology_manager.go:215] "Topology Admit Handler" podUID="85a1fcd5-ee10-4749-9dec-40efed82eb3e" podNamespace="default" podName="busybox"
	Nov 23 08:43:59 old-k8s-version-204346 kubelet[1529]: I1123 08:43:59.981400    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdg6d\" (UniqueName: \"kubernetes.io/projected/85a1fcd5-ee10-4749-9dec-40efed82eb3e-kube-api-access-tdg6d\") pod \"busybox\" (UID: \"85a1fcd5-ee10-4749-9dec-40efed82eb3e\") " pod="default/busybox"
	
	
	==> storage-provisioner [089b66b211cc086767c9fdf40aba06bcf7b4484c0976381a4bdf51afe2621f61] <==
	I1123 08:43:57.613751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:43:57.624633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:43:57.624700       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:43:57.633950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:43:57.634082       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a0771e73-2533-4e9a-bd83-ee78487b1f50", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-204346_bff6cf86-fcf0-4fe3-b85e-b85b2509b23f became leader
	I1123 08:43:57.634291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-204346_bff6cf86-fcf0-4fe3-b85e-b85b2509b23f!
	I1123 08:43:57.734684       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-204346_bff6cf86-fcf0-4fe3-b85e-b85b2509b23f!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-204346 -n old-k8s-version-204346
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-204346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
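The Allocated resources totals in the node description above follow directly from the Non-terminated Pods table; a quick worked check against those rows (all figures copied from the table):

	cpu requests:    100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m
	cpu limits:      100m (kindnet only) = 100m
	memory requests: 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet) = 220Mi
	memory limits:   170Mi (coredns) + 50Mi (kindnet) = 220Mi

To regenerate that view against the same cluster, a command along these lines should work (context and node name taken from the logs above):

	kubectl --context old-k8s-version-204346 describe node old-k8s-version-204346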
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-204346
helpers_test.go:243: (dbg) docker inspect old-k8s-version-204346:

-- stdout --
	[
	    {
	        "Id": "74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716",
	        "Created": "2025-11-23T08:43:13.914336238Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255015,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:43:13.954859222Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716/hostname",
	        "HostsPath": "/var/lib/docker/containers/74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716/hosts",
	        "LogPath": "/var/lib/docker/containers/74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716/74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716-json.log",
	        "Name": "/old-k8s-version-204346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-204346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-204346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74b9ec6867739b46c46d250281e773e2e1e6e55633355a3143f6c35242c78716",
	                "LowerDir": "/var/lib/docker/overlay2/c1a2c09b9684904e47b03e9569e26d403b09f5d541f2cb59b94c6e639ed9b4e3-init/diff:/var/lib/docker/overlay2/ee04ca8b85d0dedeb02bd9a5189a59a7f53ca89a011d262a78df32fa43bf0598/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c1a2c09b9684904e47b03e9569e26d403b09f5d541f2cb59b94c6e639ed9b4e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c1a2c09b9684904e47b03e9569e26d403b09f5d541f2cb59b94c6e639ed9b4e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c1a2c09b9684904e47b03e9569e26d403b09f5d541f2cb59b94c6e639ed9b4e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-204346",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-204346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-204346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-204346",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-204346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "db03bea2ae002bb3595102e41f0b3c5dd373e7f121cbf490c03f867ac8b10fc2",
	            "SandboxKey": "/var/run/docker/netns/db03bea2ae00",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-204346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2c3268f545c0648cec3972c75676102d767b9cbd699aea51b301ba1de04cad51",
	                    "EndpointID": "a6fed4b2c7bb6c663b8e774c8e64911b07fef263695c45641973d777a7144fb2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "1a:83:9b:a0:7e:0e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-204346",
	                        "74b9ec686773"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
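Note that "Ulimits": [] in the HostConfig above means no explicit ulimits are set on the container, so it falls back to the Docker daemon's defaults. To pull just that field instead of the full dump, a Go-template filter along these lines should work (same container name as above):

	docker inspect -f '{{json .HostConfig.Ulimits}}' old-k8s-version-204346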
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-204346 -n old-k8s-version-204346
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-204346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-204346 logs -n 25: (1.075446276s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-flag-570956 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-570956 │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p NoKubernetes-846693 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ ssh     │ -p NoKubernetes-846693 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │                     │
	│ ssh     │ force-systemd-env-352249 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-352249  │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p force-systemd-env-352249                                                                                                                                                                                                                         │ force-systemd-env-352249  │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p cert-expiration-680868 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-680868    │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ force-systemd-flag-570956 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-570956 │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p force-systemd-flag-570956                                                                                                                                                                                                                        │ force-systemd-flag-570956 │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p cert-options-194967 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ stop    │ -p NoKubernetes-846693                                                                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p NoKubernetes-846693 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ -p NoKubernetes-846693 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │                     │
	│ delete  │ -p NoKubernetes-846693                                                                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p missing-upgrade-231159 --memory=3072 --driver=docker  --container-runtime=containerd                                                                                                                                                             │ missing-upgrade-231159    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ cert-options-194967 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ -p cert-options-194967 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ delete  │ -p cert-options-194967                                                                                                                                                                                                                              │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p stopped-upgrade-595653 --memory=3072 --vm-driver=docker  --container-runtime=containerd                                                                                                                                                          │ stopped-upgrade-595653    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p missing-upgrade-231159 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                      │ missing-upgrade-231159    │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ stopped-upgrade-595653 stop                                                                                                                                                                                                                         │ stopped-upgrade-595653    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p stopped-upgrade-595653 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                      │ stopped-upgrade-595653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p stopped-upgrade-595653                                                                                                                                                                                                                           │ stopped-upgrade-595653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p old-k8s-version-204346 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p missing-upgrade-231159                                                                                                                                                                                                                           │ missing-upgrade-231159    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-999106         │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:43:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
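	Each entry below follows the klog header declared above: in "I1123 08:43:27.495640  258086 out.go:360]", "I" is the severity (Info), "1123" is the month and day, "258086" is the thread id, and "out.go:360" is the emitting file and line. A minimal sketch for pulling only the warnings and errors out of a saved copy of this log (last-start.log is a hypothetical filename):
	
		grep -E '^[[:space:]]*[WE][0-9]{4} ' last-start.log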
	I1123 08:43:27.495640  258086 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:43:27.495743  258086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:27.495751  258086 out.go:374] Setting ErrFile to fd 2...
	I1123 08:43:27.495755  258086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:27.495953  258086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:43:27.496394  258086 out.go:368] Setting JSON to false
	I1123 08:43:27.497504  258086 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5148,"bootTime":1763882259,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:43:27.497559  258086 start.go:143] virtualization: kvm guest
	I1123 08:43:27.499449  258086 out.go:179] * [no-preload-999106] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:43:27.500767  258086 notify.go:221] Checking for updates...
	I1123 08:43:27.500781  258086 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:43:27.502005  258086 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:43:27.503191  258086 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:43:27.504274  258086 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:43:27.505281  258086 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:43:27.506287  258086 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:43:27.507765  258086 config.go:182] Loaded profile config "cert-expiration-680868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:43:27.507859  258086 config.go:182] Loaded profile config "kubernetes-upgrade-776670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:43:27.507939  258086 config.go:182] Loaded profile config "old-k8s-version-204346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:43:27.508012  258086 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:43:27.532390  258086 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:43:27.532462  258086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:43:27.588863  258086 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:43:27.578321532 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
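	This docker info dump is how minikube validates the host daemon before committing to the docker driver. The same fields can be checked by hand; a minimal sketch, assuming jq is available:
	
		docker system info --format '{{json .}}' | jq '{NCPU, MemTotal, CgroupDriver, ServerVersion}'
	
	Note CgroupDriver:systemd in the output above; it is why containerd inside the node is later reconfigured with SystemdCgroup = true.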
	I1123 08:43:27.588959  258086 docker.go:319] overlay module found
	I1123 08:43:27.590837  258086 out.go:179] * Using the docker driver based on user configuration
	I1123 08:43:27.592139  258086 start.go:309] selected driver: docker
	I1123 08:43:27.592164  258086 start.go:927] validating driver "docker" against <nil>
	I1123 08:43:27.592175  258086 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:43:27.592773  258086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:43:27.653421  258086 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:43:27.643267927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:43:27.653668  258086 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:43:27.653954  258086 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:43:27.655624  258086 out.go:179] * Using Docker driver with root privileges
	I1123 08:43:27.656995  258086 cni.go:84] Creating CNI manager for ""
	I1123 08:43:27.657071  258086 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:43:27.657084  258086 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:43:27.657159  258086 start.go:353] cluster config:
	{Name:no-preload-999106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-999106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:43:27.658480  258086 out.go:179] * Starting "no-preload-999106" primary control-plane node in "no-preload-999106" cluster
	I1123 08:43:27.659678  258086 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:43:27.660749  258086 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:43:27.661680  258086 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:43:27.661748  258086 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:43:27.661771  258086 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/config.json ...
	I1123 08:43:27.661801  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/config.json: {Name:mk1854d74e572dba5e78564093e1183622e9aa74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
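	The cluster config dumped a few lines above is what gets persisted here as the profile's config.json. Assuming the on-disk JSON mirrors the struct fields shown, individual settings can be queried with jq:
	
		jq '{Driver, Memory, KubernetesVersion: .KubernetesConfig.KubernetesVersion}' \
		  /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/config.json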
	I1123 08:43:27.661927  258086 cache.go:107] acquiring lock: {Name:mka7418a84f8d9aaa890eb7bcafd158f0f845949 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.661970  258086 cache.go:107] acquiring lock: {Name:mke646091201bbef396ff67d16f0cce49990b355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.661948  258086 cache.go:107] acquiring lock: {Name:mk929bb8e7363fd9f8d602565b078a816979b3d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.661979  258086 cache.go:107] acquiring lock: {Name:mk667c169463661b7e999b395cc2d348440d0d0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.662058  258086 cache.go:115] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 08:43:27.662070  258086 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:27.662087  258086 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:27.662069  258086 cache.go:107] acquiring lock: {Name:mk4a8ffda79c57b59d9ec0be62cf6989cc0b3dc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.662104  258086 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:27.662089  258086 cache.go:107] acquiring lock: {Name:mkce85e18a9851767cd13073008b6382df083ea3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.662080  258086 cache.go:107] acquiring lock: {Name:mk495076811ea27b7ee848ef73ebf58029c788de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.662200  258086 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:27.662257  258086 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:27.662073  258086 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 113.368µs
	I1123 08:43:27.662298  258086 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 08:43:27.662298  258086 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:27.662338  258086 cache.go:107] acquiring lock: {Name:mkc513b15aec17d5c3e77aa2e6131827198f8c26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.662430  258086 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:43:27.663312  258086 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:27.663446  258086 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:27.663495  258086 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:27.663529  258086 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:27.663560  258086 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:43:27.663553  258086 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:27.663602  258086 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:27.683115  258086 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:43:27.683133  258086 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:43:27.683151  258086 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:43:27.683188  258086 start.go:360] acquireMachinesLock for no-preload-999106: {Name:mk535dea2e363deaa61ac9c5041ac2d499c9efc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:27.683286  258086 start.go:364] duration metric: took 77.877µs to acquireMachinesLock for "no-preload-999106"
	I1123 08:43:27.683314  258086 start.go:93] Provisioning new machine with config: &{Name:no-preload-999106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-999106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:43:27.683378  258086 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:43:23.886201  254114 out.go:252]   - Booting up control plane ...
	I1123 08:43:23.886286  254114 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:43:23.886377  254114 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:43:23.886992  254114 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:43:23.903197  254114 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:43:23.904138  254114 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:43:23.904196  254114 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:43:24.010365  254114 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1123 08:43:28.512514  254114 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502224 seconds
	I1123 08:43:28.512707  254114 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:43:28.525209  254114 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:43:29.051871  254114 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:43:29.052189  254114 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-204346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:43:29.563746  254114 kubeadm.go:319] [bootstrap-token] Using token: kv40xr.vpl4w4wq1fqvcjbv
	I1123 08:43:29.565119  254114 out.go:252]   - Configuring RBAC rules ...
	I1123 08:43:29.565274  254114 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:43:29.570668  254114 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:43:29.578425  254114 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:43:29.581516  254114 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:43:29.584593  254114 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:43:29.588395  254114 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:43:29.599565  254114 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:43:29.809875  254114 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:43:29.974613  254114 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:43:29.975627  254114 kubeadm.go:319] 
	I1123 08:43:29.975755  254114 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:43:29.975777  254114 kubeadm.go:319] 
	I1123 08:43:29.975879  254114 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:43:29.975889  254114 kubeadm.go:319] 
	I1123 08:43:29.975929  254114 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:43:29.976013  254114 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:43:29.976095  254114 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:43:29.976109  254114 kubeadm.go:319] 
	I1123 08:43:29.976189  254114 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:43:29.976197  254114 kubeadm.go:319] 
	I1123 08:43:29.976265  254114 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:43:29.976274  254114 kubeadm.go:319] 
	I1123 08:43:29.976365  254114 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:43:29.976483  254114 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:43:29.976577  254114 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:43:29.976584  254114 kubeadm.go:319] 
	I1123 08:43:29.976725  254114 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:43:29.976849  254114 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:43:29.976864  254114 kubeadm.go:319] 
	I1123 08:43:29.976980  254114 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kv40xr.vpl4w4wq1fqvcjbv \
	I1123 08:43:29.977124  254114 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c48a3b11504a9c7a5d242d913eadf6a5354a8cb06c9ffcf8385d22efb04d8fa \
	I1123 08:43:29.977157  254114 kubeadm.go:319] 	--control-plane 
	I1123 08:43:29.977166  254114 kubeadm.go:319] 
	I1123 08:43:29.977310  254114 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:43:29.977319  254114 kubeadm.go:319] 
	I1123 08:43:29.977452  254114 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kv40xr.vpl4w4wq1fqvcjbv \
	I1123 08:43:29.977614  254114 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c48a3b11504a9c7a5d242d913eadf6a5354a8cb06c9ffcf8385d22efb04d8fa 
	I1123 08:43:29.980159  254114 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:43:29.980378  254114 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
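	The join commands printed above embed a bootstrap token plus the SHA-256 hash of the cluster CA's public key. That hash can be recomputed on the control-plane node with the standard recipe from the kubeadm documentation (assuming an RSA CA key):
	
		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'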
	I1123 08:43:29.980409  254114 cni.go:84] Creating CNI manager for ""
	I1123 08:43:29.980425  254114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:43:29.984213  254114 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:43:27.685925  258086 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:43:27.686123  258086 start.go:159] libmachine.API.Create for "no-preload-999106" (driver="docker")
	I1123 08:43:27.686177  258086 client.go:173] LocalClient.Create starting
	I1123 08:43:27.686233  258086 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem
	I1123 08:43:27.686260  258086 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:27.686276  258086 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:27.686316  258086 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem
	I1123 08:43:27.686334  258086 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:27.686346  258086 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:27.686738  258086 cli_runner.go:164] Run: docker network inspect no-preload-999106 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:43:27.705175  258086 cli_runner.go:211] docker network inspect no-preload-999106 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:43:27.705249  258086 network_create.go:284] running [docker network inspect no-preload-999106] to gather additional debugging logs...
	I1123 08:43:27.705267  258086 cli_runner.go:164] Run: docker network inspect no-preload-999106
	W1123 08:43:27.723756  258086 cli_runner.go:211] docker network inspect no-preload-999106 returned with exit code 1
	I1123 08:43:27.723782  258086 network_create.go:287] error running [docker network inspect no-preload-999106]: docker network inspect no-preload-999106: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-999106 not found
	I1123 08:43:27.723796  258086 network_create.go:289] output of [docker network inspect no-preload-999106]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-999106 not found
	
	** /stderr **
	I1123 08:43:27.723894  258086 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:27.742266  258086 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5d8b9fdde185 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:76:1f:2b:8a:58:68} reservation:<nil>}
	I1123 08:43:27.742817  258086 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-103255eb2e92 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:bb:33:85:24:bc} reservation:<nil>}
	I1123 08:43:27.743314  258086 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fa9f597fddc6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:bb:01:5e:01:61} reservation:<nil>}
	I1123 08:43:27.743832  258086 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-da43b5ed9d8a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:fe:29:08:73:55} reservation:<nil>}
	I1123 08:43:27.744448  258086 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c01e10}
	I1123 08:43:27.744470  258086 network_create.go:124] attempt to create docker network no-preload-999106 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:43:27.744518  258086 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-999106 no-preload-999106
	I1123 08:43:27.793693  258086 network_create.go:108] docker network no-preload-999106 192.168.85.0/24 created
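	As the skipped-subnet entries show, minikube probes the private 192.168.x.0/24 ranges in steps of 9 (49, 58, 67, 76) until it finds a free one, here 192.168.85.0/24, then creates a matching bridge network. The occupied subnets can be listed with the same template the log itself uses:
	
		docker network ls -q --filter driver=bridge \
		  | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'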
	I1123 08:43:27.793726  258086 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-999106" container
	I1123 08:43:27.793798  258086 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:43:27.815508  258086 cli_runner.go:164] Run: docker volume create no-preload-999106 --label name.minikube.sigs.k8s.io=no-preload-999106 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:43:27.836788  258086 oci.go:103] Successfully created a docker volume no-preload-999106
	I1123 08:43:27.836929  258086 cli_runner.go:164] Run: docker run --rm --name no-preload-999106-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-999106 --entrypoint /usr/bin/test -v no-preload-999106:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:43:27.851417  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:43:27.858908  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:43:27.860347  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:43:27.863442  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:43:27.865314  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:43:27.878248  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:43:27.889986  258086 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1123 08:43:27.973948  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1123 08:43:27.973981  258086 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 311.645455ms
	I1123 08:43:27.973999  258086 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 08:43:28.304822  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 08:43:28.304856  258086 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 642.854298ms
	I1123 08:43:28.304870  258086 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 08:43:28.332384  258086 oci.go:107] Successfully prepared a docker volume no-preload-999106
	I1123 08:43:28.332436  258086 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1123 08:43:28.332544  258086 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:43:28.332582  258086 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:43:28.332628  258086 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:43:28.401507  258086 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-999106 --name no-preload-999106 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-999106 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-999106 --network no-preload-999106 --ip 192.168.85.2 --volume no-preload-999106:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:43:28.713710  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Running}}
	I1123 08:43:28.734068  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:43:28.754748  258086 cli_runner.go:164] Run: docker exec no-preload-999106 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:43:28.804354  258086 oci.go:144] the created container "no-preload-999106" has a running status.
	I1123 08:43:28.804388  258086 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa...
	I1123 08:43:28.861878  258086 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:43:28.899755  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:43:28.921384  258086 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:43:28.921408  258086 kic_runner.go:114] Args: [docker exec --privileged no-preload-999106 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:43:28.971140  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:43:28.992543  258086 machine.go:94] provisionDockerMachine start ...
	I1123 08:43:28.992659  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:29.017873  258086 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:29.018228  258086 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1123 08:43:29.018252  258086 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:43:29.019229  258086 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57704->127.0.0.1:33063: read: connection reset by peer
	I1123 08:43:29.339938  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 08:43:29.339967  258086 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.677878189s
	I1123 08:43:29.339993  258086 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 08:43:29.349964  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 08:43:29.349997  258086 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.688022096s
	I1123 08:43:29.350017  258086 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 08:43:29.423577  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 08:43:29.423607  258086 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.761664135s
	I1123 08:43:29.423620  258086 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 08:43:29.487535  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 08:43:29.487565  258086 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.825655813s
	I1123 08:43:29.487576  258086 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 08:43:29.829693  258086 cache.go:157] /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 08:43:29.829727  258086 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.16770936s
	I1123 08:43:29.829741  258086 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 08:43:29.829763  258086 cache.go:87] Successfully saved all images to host disk.
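	Because this profile was started with --preload=false, every image above was pulled from the registry and saved to the shared cache instead of being restored from a preload tarball. The resulting tar files sit under the directory repeated in the cache entries:
	
		ls /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/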
	I1123 08:43:32.164591  258086 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-999106
	
	I1123 08:43:32.164618  258086 ubuntu.go:182] provisioning hostname "no-preload-999106"
	I1123 08:43:32.164701  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:32.183134  258086 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:32.183339  258086 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1123 08:43:32.183352  258086 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-999106 && echo "no-preload-999106" | sudo tee /etc/hostname
	I1123 08:43:32.340889  258086 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-999106
	
	I1123 08:43:32.340971  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:32.359419  258086 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:32.359677  258086 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1123 08:43:32.359696  258086 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-999106' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-999106/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-999106' | sudo tee -a /etc/hosts; 
				fi
			fi
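	The snippet above makes the node's hostname self-resolvable: if no /etc/hosts line already matches, it rewrites an existing 127.0.1.1 entry or appends a new one. A condensed one-shot equivalent (a sketch, not what minikube actually runs):
	
		grep -q 'no-preload-999106' /etc/hosts \
		  || echo '127.0.1.1 no-preload-999106' | sudo tee -a /etc/hosts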
	I1123 08:43:29.985991  254114 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:43:29.990966  254114 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1123 08:43:29.990985  254114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:43:30.005005  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
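	Here the 254114 process (the old-k8s-version-204346 start, interleaved with the no-preload one) applies the kindnet manifest recommended at cni.go:143, using the cluster's own kubectl binary. A sketch for checking the rollout afterwards; the app=kindnet label is an assumption based on the stock kindnet manifest:
	
		kubectl --context old-k8s-version-204346 -n kube-system get pods -l app=kindnet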
	I1123 08:43:30.649440  254114 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:43:30.649546  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:30.649581  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-204346 minikube.k8s.io/updated_at=2025_11_23T08_43_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=old-k8s-version-204346 minikube.k8s.io/primary=true
	I1123 08:43:30.659700  254114 ops.go:34] apiserver oom_adj: -16
	I1123 08:43:30.729410  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:31.230340  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:31.730113  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:32.230535  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:32.729772  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:32.505327  258086 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:43:32.505361  258086 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-13876/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-13876/.minikube}
	I1123 08:43:32.505408  258086 ubuntu.go:190] setting up certificates
	I1123 08:43:32.505430  258086 provision.go:84] configureAuth start
	I1123 08:43:32.505484  258086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-999106
	I1123 08:43:32.523951  258086 provision.go:143] copyHostCerts
	I1123 08:43:32.524019  258086 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem, removing ...
	I1123 08:43:32.524033  258086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem
	I1123 08:43:32.524115  258086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem (1675 bytes)
	I1123 08:43:32.524235  258086 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem, removing ...
	I1123 08:43:32.524248  258086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem
	I1123 08:43:32.524289  258086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem (1078 bytes)
	I1123 08:43:32.524373  258086 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem, removing ...
	I1123 08:43:32.524383  258086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem
	I1123 08:43:32.524416  258086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem (1123 bytes)
	I1123 08:43:32.524499  258086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem org=jenkins.no-preload-999106 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-999106]
	I1123 08:43:32.587554  258086 provision.go:177] copyRemoteCerts
	I1123 08:43:32.587609  258086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:43:32.587655  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:32.605984  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:43:32.708249  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:43:32.727969  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:43:32.747752  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:43:32.766001  258086 provision.go:87] duration metric: took 260.555897ms to configureAuth
	I1123 08:43:32.766029  258086 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:43:32.766187  258086 config.go:182] Loaded profile config "no-preload-999106": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:43:32.766198  258086 machine.go:97] duration metric: took 3.773633247s to provisionDockerMachine
	I1123 08:43:32.766204  258086 client.go:176] duration metric: took 5.080019183s to LocalClient.Create
	I1123 08:43:32.766223  258086 start.go:167] duration metric: took 5.080101552s to libmachine.API.Create "no-preload-999106"
	I1123 08:43:32.766232  258086 start.go:293] postStartSetup for "no-preload-999106" (driver="docker")
	I1123 08:43:32.766242  258086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:43:32.766283  258086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:43:32.766317  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:32.785085  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:43:32.889673  258086 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:43:32.893433  258086 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:43:32.893459  258086 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:43:32.893470  258086 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/addons for local assets ...
	I1123 08:43:32.893520  258086 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/files for local assets ...
	I1123 08:43:32.893624  258086 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem -> 174422.pem in /etc/ssl/certs
	I1123 08:43:32.893761  258086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:43:32.902075  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem --> /etc/ssl/certs/174422.pem (1708 bytes)
	I1123 08:43:32.921898  258086 start.go:296] duration metric: took 155.652278ms for postStartSetup
	I1123 08:43:32.922243  258086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-999106
	I1123 08:43:32.940711  258086 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/config.json ...
	I1123 08:43:32.940999  258086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:43:32.941041  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:32.959311  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:43:33.058968  258086 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:43:33.063670  258086 start.go:128] duration metric: took 5.380278318s to createHost
	I1123 08:43:33.063696  258086 start.go:83] releasing machines lock for "no-preload-999106", held for 5.380396187s
	I1123 08:43:33.063776  258086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-999106
	I1123 08:43:33.082497  258086 ssh_runner.go:195] Run: cat /version.json
	I1123 08:43:33.082555  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:33.082576  258086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:43:33.082676  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:43:33.101516  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:43:33.101929  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:43:33.258150  258086 ssh_runner.go:195] Run: systemctl --version
	I1123 08:43:33.265003  258086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:43:33.270133  258086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:43:33.270202  258086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:43:33.301093  258086 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
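	ssh_runner logs the find command with its shell quoting stripped; a re-quoted reconstruction of what runs on the node would be:
	
		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
		  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	
	This is what renames the crio and podman bridge configs listed above out of the way, so that only kindnet's CNI config remains active.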
	I1123 08:43:33.301114  258086 start.go:496] detecting cgroup driver to use...
	I1123 08:43:33.301140  258086 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:43:33.301187  258086 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:43:33.316380  258086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:43:33.328339  258086 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:43:33.328388  258086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:43:33.344573  258086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:43:33.362321  258086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:43:33.449438  258086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:43:33.532610  258086 docker.go:234] disabling docker service ...
	I1123 08:43:33.532689  258086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:43:33.551827  258086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:43:33.564985  258086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:43:33.650121  258086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:43:33.736173  258086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:43:33.749245  258086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:43:33.764351  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:43:33.774567  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:43:33.784258  258086 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 08:43:33.784327  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 08:43:33.794411  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:43:33.804033  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:43:33.812857  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:43:33.821787  258086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:43:33.829930  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:43:33.839002  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:43:33.847926  258086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:43:33.856822  258086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:43:33.864542  258086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:43:33.871885  258086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:43:33.950854  258086 ssh_runner.go:195] Run: sudo systemctl restart containerd
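The sed edits above rewrite /etc/containerd/config.toml in place (systemd cgroup driver, runc v2 shim, pause 3.10.1 sandbox image, CNI conf dir, unprivileged ports), and the daemon-reload plus restart makes them take effect. A minimal sketch for verifying the result on the node, using the same paths the log shows:

	grep -n 'SystemdCgroup' /etc/containerd/config.toml   # should show: SystemdCgroup = true
	grep -n 'sandbox_image' /etc/containerd/config.toml   # should show: registry.k8s.io/pause:3.10.1
	test -S /run/containerd/containerd.sock && echo "containerd socket is up"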
	I1123 08:43:34.024458  258086 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:43:34.024534  258086 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:43:34.029083  258086 start.go:564] Will wait 60s for crictl version
	I1123 08:43:34.029145  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.032799  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:43:34.057987  258086 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:43:34.058049  258086 ssh_runner.go:195] Run: containerd --version
	I1123 08:43:34.078381  258086 ssh_runner.go:195] Run: containerd --version
	I1123 08:43:34.100680  258086 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:43:36.163341  206485 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.069407293s)
	W1123 08:43:36.163379  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1123 08:43:36.163391  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:36.163401  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:36.196694  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:36.196725  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:36.230996  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:36.231018  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:36.266205  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:36.266235  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:34.101669  258086 cli_runner.go:164] Run: docker network inspect no-preload-999106 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:34.119192  258086 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:43:34.123375  258086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
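The one-liner above is minikube's idiom for an idempotent /etc/hosts update: filter out any previous host.minikube.internal entry, append a fresh one, and copy the temp file into place with sudo (a plain `sudo cmd > /etc/hosts` redirection would fail, since the unprivileged shell, not sudo, opens the target). The same pattern generalized, with hypothetical NAME/ENTRY values and a literal tab in the pattern:

	NAME="host.minikube.internal"; ENTRY="192.168.85.1	${NAME}"
	{ grep -v "	${NAME}\$" /etc/hosts; echo "${ENTRY}"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$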
	I1123 08:43:34.134033  258086 kubeadm.go:884] updating cluster {Name:no-preload-999106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-999106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:43:34.134129  258086 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:43:34.134170  258086 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:43:34.159373  258086 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1123 08:43:34.159392  258086 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1123 08:43:34.159438  258086 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:34.159452  258086 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.159485  258086 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.159504  258086 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.159534  258086 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.159485  258086 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:43:34.159583  258086 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.159658  258086 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.161000  258086 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.161332  258086 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.161540  258086 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:34.161951  258086 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.162137  258086 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.162179  258086 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.162238  258086 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:43:34.162370  258086 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
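The eight "daemon lookup ... No such image" lines above are expected on a no-preload run: minikube first asks the host's Docker daemon for each image and, on a miss, falls back to the image tarballs in its on-disk cache, which is what the cache_images.go lines below are doing. A sketch for inspecting that cache on the host, using the path the log itself reports:

	ls /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/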
	I1123 08:43:34.303423  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1123 08:43:34.303507  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.304294  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1123 08:43:34.304346  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.325396  258086 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1123 08:43:34.325443  258086 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.325489  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.325396  258086 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1123 08:43:34.325524  258086 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.325560  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.329408  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.330479  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.332092  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1123 08:43:34.332130  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1123 08:43:34.334793  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1123 08:43:34.334839  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.334892  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1123 08:43:34.334947  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.359405  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.359448  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.359453  258086 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1123 08:43:34.359480  258086 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1123 08:43:34.359511  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.359927  258086 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1123 08:43:34.359953  258086 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.359986  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.362071  258086 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1123 08:43:34.362107  258086 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.362148  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.386773  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:34.388038  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:34.388124  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:34.388148  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.388227  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.402862  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1123 08:43:34.402936  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.406588  258086 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1123 08:43:34.406683  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.419900  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:43:34.420019  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:34.422632  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:43:34.422820  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:34.422852  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:43:34.422867  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.422905  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.432625  258086 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1123 08:43:34.432698  258086 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.432750  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.435170  258086 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1123 08:43:34.435213  258086 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.435236  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1123 08:43:34.435258  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:34.435263  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1123 08:43:34.468602  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1123 08:43:34.468621  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:34.468654  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1123 08:43:34.468703  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:34.468726  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:34.468757  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.468795  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.563471  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1123 08:43:34.563530  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.563577  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:43:34.563667  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:43:34.563682  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.563581  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:34.563706  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:34.563755  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:43:34.626877  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1123 08:43:34.626895  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1123 08:43:34.626913  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1123 08:43:34.626923  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1123 08:43:34.626927  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1123 08:43:34.626943  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1123 08:43:34.626974  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:34.627042  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:34.685224  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:43:34.685246  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:43:34.685326  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:34.685340  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:34.700613  258086 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:34.700688  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:34.713376  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1123 08:43:34.713409  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1123 08:43:34.713407  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1123 08:43:34.713434  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1123 08:43:34.840943  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
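Each image follows the same cycle visible above: `crictl rmi` any stale copy, scp the cached tarball to /var/lib/minikube/images on the node, then import it into containerd's k8s.io namespace so the kubelet can see it. Done by hand, the import-and-verify step looks roughly like this (names taken from the log):

	sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	sudo crictl images | grep pause   # registry.k8s.io/pause  3.10.1  ...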
	I1123 08:43:34.885583  258086 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:34.885674  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:35.489785  258086 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1123 08:43:35.489853  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:36.097868  258086 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.212165923s)
	I1123 08:43:36.097898  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1123 08:43:36.097915  258086 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1123 08:43:36.097931  258086 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:36.097957  258086 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:36.097992  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:36.098005  258086 ssh_runner.go:195] Run: which crictl
	I1123 08:43:37.105043  258086 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.007027025s)
	I1123 08:43:37.105070  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1123 08:43:37.105098  258086 ssh_runner.go:235] Completed: which crictl: (1.007074313s)
	I1123 08:43:37.105104  258086 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:43:37.105153  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:37.105159  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:43:37.133915  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:33.230087  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:33.729573  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:34.229556  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:34.729739  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:35.229458  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:35.729622  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:36.229768  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:36.730508  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:37.229765  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:37.729788  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:38.229952  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:38.730333  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:39.229833  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:39.729862  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:40.229901  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:40.729885  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:41.230479  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:41.730515  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:42.230247  254114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:42.326336  254114 kubeadm.go:1114] duration metric: took 11.676850942s to wait for elevateKubeSystemPrivileges
	I1123 08:43:42.326376  254114 kubeadm.go:403] duration metric: took 21.509472133s to StartCluster
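The half-second polling loop above is how elevateKubeSystemPrivileges waits for cluster bootstrap: it retries `kubectl get sa default` until the default service account exists, which signals that the controller-manager's token controller has finished populating the namespace. Roughly the same check from the host (a sketch, using the profile's kubeconfig context):

	kubectl --context old-k8s-version-204346 get serviceaccount default -o name
	# serviceaccount/default  ->  bootstrap is far enough along to proceed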
	I1123 08:43:42.326398  254114 settings.go:142] acquiring lock: {Name:mk2c00a8b461754a49d5c7fd5af34c7d1005153a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:42.326470  254114 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:43:42.328223  254114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/kubeconfig: {Name:mk636046b7146fd65b5638a6d549b76e61f7f055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:42.328482  254114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:43:42.328500  254114 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:43:42.328566  254114 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:43:42.328729  254114 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-204346"
	I1123 08:43:42.328754  254114 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-204346"
	I1123 08:43:42.328778  254114 config.go:182] Loaded profile config "old-k8s-version-204346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:43:42.328793  254114 host.go:66] Checking if "old-k8s-version-204346" exists ...
	I1123 08:43:42.328837  254114 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-204346"
	I1123 08:43:42.328856  254114 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-204346"
	I1123 08:43:42.329183  254114 cli_runner.go:164] Run: docker container inspect old-k8s-version-204346 --format={{.State.Status}}
	I1123 08:43:42.329321  254114 cli_runner.go:164] Run: docker container inspect old-k8s-version-204346 --format={{.State.Status}}
	I1123 08:43:42.331021  254114 out.go:179] * Verifying Kubernetes components...
	I1123 08:43:42.332482  254114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:43:42.357866  254114 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:38.827550  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:38.827977  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
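Each retry cycle in this goroutine starts the same way: probe the apiserver's /healthz endpoint and, when the TCP connect is refused, fall back to enumerating CRI containers and dumping their logs to diagnose why. A rough by-hand equivalent of the probe (skipping certificate verification for brevity; a refused connection means no apiserver is listening yet):

	curl -fsSk https://192.168.76.2:8443/healthz
	# connect: connection refused  ->  apiserver container is down or restarting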
	I1123 08:43:38.828023  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:38.828070  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:38.854573  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:38.854598  206485 cri.go:89] found id: "89f5abdf45afb9ff15a0744d6b71c9196e67d8f1e07dbde6c14130fa812cd030"
	I1123 08:43:38.854603  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:38.854606  206485 cri.go:89] found id: ""
	I1123 08:43:38.854613  206485 logs.go:282] 3 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 89f5abdf45afb9ff15a0744d6b71c9196e67d8f1e07dbde6c14130fa812cd030 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:38.854688  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.858901  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.862744  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.866475  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:38.866533  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:38.892493  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:38.892520  206485 cri.go:89] found id: ""
	I1123 08:43:38.892528  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:38.892575  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.896728  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:38.896790  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:38.923307  206485 cri.go:89] found id: ""
	I1123 08:43:38.923331  206485 logs.go:282] 0 containers: []
	W1123 08:43:38.923340  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:38.923346  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:38.923392  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:38.949371  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:38.949396  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:38.949401  206485 cri.go:89] found id: ""
	I1123 08:43:38.949407  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:38.949452  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.953461  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.957266  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:38.957315  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:38.984054  206485 cri.go:89] found id: ""
	I1123 08:43:38.984077  206485 logs.go:282] 0 containers: []
	W1123 08:43:38.984084  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:38.984090  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:38.984144  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:39.014867  206485 cri.go:89] found id: "7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:39.014894  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:39.014900  206485 cri.go:89] found id: ""
	I1123 08:43:39.014909  206485 logs.go:282] 2 containers: [7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:39.014988  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:39.019876  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:39.024471  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:39.024545  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:39.056343  206485 cri.go:89] found id: ""
	I1123 08:43:39.056370  206485 logs.go:282] 0 containers: []
	W1123 08:43:39.056382  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:39.056390  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:39.056447  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:39.087173  206485 cri.go:89] found id: ""
	I1123 08:43:39.087200  206485 logs.go:282] 0 containers: []
	W1123 08:43:39.087209  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:39.087218  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:39.087230  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:39.143340  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:39.143373  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:39.182502  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:39.182538  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:39.220490  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:39.220526  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:39.279713  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:39.279751  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:39.296632  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:39.296672  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:39.369445  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:39.369477  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:39.369493  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:39.412743  206485 logs.go:123] Gathering logs for kube-controller-manager [7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb] ...
	I1123 08:43:39.412782  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:39.445988  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:39.446015  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:39.482074  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:39.482110  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:39.578994  206485 logs.go:123] Gathering logs for kube-apiserver [89f5abdf45afb9ff15a0744d6b71c9196e67d8f1e07dbde6c14130fa812cd030] ...
	I1123 08:43:39.579036  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89f5abdf45afb9ff15a0744d6b71c9196e67d8f1e07dbde6c14130fa812cd030"
	I1123 08:43:39.619624  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:39.619684  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:39.661136  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:39.661175  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:42.204267  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:42.204712  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:42.204771  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:42.204826  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:42.232709  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:42.232730  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:42.232735  206485 cri.go:89] found id: ""
	I1123 08:43:42.232744  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:42.232799  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.236622  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.240968  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:42.241028  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:42.281849  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:42.281877  206485 cri.go:89] found id: ""
	I1123 08:43:42.281885  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:42.281942  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.287991  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:42.288063  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:42.327625  206485 cri.go:89] found id: ""
	I1123 08:43:42.327669  206485 logs.go:282] 0 containers: []
	W1123 08:43:42.327679  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:42.327687  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:42.327768  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:39.015203  258086 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.910026064s)
	I1123 08:43:39.015228  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1123 08:43:39.015249  258086 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:39.015286  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:39.015301  258086 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.881356677s)
	I1123 08:43:39.015367  258086 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:39.981839  258086 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1123 08:43:39.981862  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1123 08:43:39.981901  258086 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:39.981948  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:43:39.981955  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:39.985933  258086 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1123 08:43:39.985965  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1123 08:43:41.077380  258086 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.095406466s)
	I1123 08:43:41.077408  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1123 08:43:41.077435  258086 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:43:41.077497  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:43:42.358205  254114 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-204346"
	I1123 08:43:42.358246  254114 host.go:66] Checking if "old-k8s-version-204346" exists ...
	I1123 08:43:42.358752  254114 cli_runner.go:164] Run: docker container inspect old-k8s-version-204346 --format={{.State.Status}}
	I1123 08:43:42.359206  254114 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:43:42.359225  254114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:43:42.359285  254114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-204346
	I1123 08:43:42.389614  254114 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:43:42.389635  254114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:43:42.389707  254114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-204346
	I1123 08:43:42.391185  254114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/old-k8s-version-204346/id_rsa Username:docker}
	I1123 08:43:42.422459  254114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/old-k8s-version-204346/id_rsa Username:docker}
	I1123 08:43:42.449217  254114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
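The pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts plugin block in front of the `forward . /etc/resolv.conf` line (plus a `log` directive before `errors`), and replaces the ConfigMap in place. Modulo indentation, the injected Corefile fragment is:

	hosts {
	   192.168.103.1 host.minikube.internal
	   fallthrough
	}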
	I1123 08:43:42.517611  254114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:43:42.534960  254114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:43:42.564953  254114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:43:42.780756  254114 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 08:43:42.781954  254114 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-204346" to be "Ready" ...
	I1123 08:43:43.034443  254114 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:43:43.035744  254114 addons.go:530] duration metric: took 707.164659ms for enable addons: enabled=[storage-provisioner default-storageclass]
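With both addons applied, the profile is functionally up even though the node-readiness wait continues below. From the host, the addon state can be confirmed with a sketch like:

	minikube -p old-k8s-version-204346 addons list | grep -E 'storage-provisioner|default-storageclass'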
	I1123 08:43:42.368955  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:42.368979  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:42.368985  206485 cri.go:89] found id: ""
	I1123 08:43:42.368996  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:42.370472  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.378043  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.388658  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:42.388749  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:42.429522  206485 cri.go:89] found id: ""
	I1123 08:43:42.429549  206485 logs.go:282] 0 containers: []
	W1123 08:43:42.429559  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:42.429566  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:42.429632  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:42.469043  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:42.469070  206485 cri.go:89] found id: "7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:42.469076  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:42.469081  206485 cri.go:89] found id: ""
	I1123 08:43:42.469089  206485 logs.go:282] 3 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:42.469144  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.475315  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.481874  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:42.488696  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:42.488921  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:42.533856  206485 cri.go:89] found id: ""
	I1123 08:43:42.533914  206485 logs.go:282] 0 containers: []
	W1123 08:43:42.533926  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:42.533934  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:42.534029  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:42.577521  206485 cri.go:89] found id: ""
	I1123 08:43:42.577543  206485 logs.go:282] 0 containers: []
	W1123 08:43:42.577550  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:42.577559  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:42.577568  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:42.665576  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:42.665601  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:42.665622  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:42.723908  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:42.723945  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:42.766588  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:42.766618  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:42.815960  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:42.816050  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:42.836362  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:42.836393  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:42.883211  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:42.883249  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:42.925983  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:42.926057  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:43.002532  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:43.002565  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:43.048891  206485 logs.go:123] Gathering logs for kube-controller-manager [7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb] ...
	I1123 08:43:43.048923  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:43.080573  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:43.080606  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:43.145471  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:43.145510  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:43.182994  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:43.183035  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
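The block above is minikube's diagnostic sweep: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>` over SSH, collects the matching container IDs, and then tails each container's logs. A minimal local sketch of that discovery step, assuming `crictl` is on the PATH (the helper below is illustrative, not minikube's actual cri.go):

```go
// Sketch of the container-discovery loop seen in the log above: list the
// container IDs for one component name with crictl, returning an empty
// slice when nothing matches (the "0 containers: []" case).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	// Same flags as the log: crictl ps -a --quiet --name=<component>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
```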
	I1123 08:43:45.803715  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:45.804092  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:45.804151  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:45.804211  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:45.842142  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:45.842161  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:45.842165  206485 cri.go:89] found id: ""
	I1123 08:43:45.842172  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:45.842223  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.846225  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.850730  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:45.850797  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:45.879479  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:45.879506  206485 cri.go:89] found id: ""
	I1123 08:43:45.879515  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:45.879576  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.884738  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:45.884801  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:45.916040  206485 cri.go:89] found id: ""
	I1123 08:43:45.916069  206485 logs.go:282] 0 containers: []
	W1123 08:43:45.916080  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:45.916088  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:45.916155  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:45.947206  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:45.947237  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:45.947242  206485 cri.go:89] found id: ""
	I1123 08:43:45.947252  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:45.947308  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.952246  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.956172  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:45.956233  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:45.986919  206485 cri.go:89] found id: ""
	I1123 08:43:45.986945  206485 logs.go:282] 0 containers: []
	W1123 08:43:45.986956  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:45.986964  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:45.987017  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:46.019241  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:46.019269  206485 cri.go:89] found id: "7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:46.019273  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:46.019278  206485 cri.go:89] found id: ""
	I1123 08:43:46.019286  206485 logs.go:282] 3 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:46.019345  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:46.024190  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:46.028847  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:46.033363  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:46.033436  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:46.067781  206485 cri.go:89] found id: ""
	I1123 08:43:46.067808  206485 logs.go:282] 0 containers: []
	W1123 08:43:46.067819  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:46.067827  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:46.067885  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:46.100053  206485 cri.go:89] found id: ""
	I1123 08:43:46.100084  206485 logs.go:282] 0 containers: []
	W1123 08:43:46.100094  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:46.100107  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:46.100122  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:46.146426  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:46.146456  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:46.208332  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:46.208375  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:46.247193  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:46.247229  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:46.264714  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:46.264742  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:46.336341  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:46.336363  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:46.336376  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:46.379827  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:46.379866  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:46.425899  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:46.425925  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:46.491769  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:46.491805  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:46.523775  206485 logs.go:123] Gathering logs for kube-controller-manager [7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb] ...
	I1123 08:43:46.523805  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:46.555025  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:46.555060  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:46.592667  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:46.592709  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:46.691047  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:46.691081  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
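Each of these diagnostic rounds is triggered by the failed apiserver health probe above (`Get "https://192.168.76.2:8443/healthz": ... connection refused`). A sketch of such a probe, assuming a short timeout and skipped TLS verification for a bootstrap-time check (an assumption; minikube's real logic lives in api_server.go):

```go
// Minimal healthz probe against the endpoint named in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: skip verification because the cluster CA may not be
		// trusted yet while the control plane is still coming up.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. "connect: connection refused" as logged
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
```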
	I1123 08:43:43.958800  258086 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.881269634s)
	I1123 08:43:43.958835  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 08:43:43.958864  258086 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:43:43.958908  258086 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:43:44.336453  258086 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-13876/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 08:43:44.336514  258086 cache_images.go:125] Successfully loaded all cached images
	I1123 08:43:44.336522  258086 cache_images.go:94] duration metric: took 10.177118s to LoadCachedImages
	I1123 08:43:44.336535  258086 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1123 08:43:44.336675  258086 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-999106 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-999106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:43:44.336740  258086 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:43:44.361999  258086 cni.go:84] Creating CNI manager for ""
	I1123 08:43:44.362021  258086 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:43:44.362037  258086 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:43:44.362060  258086 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-999106 NodeName:no-preload-999106 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:43:44.362197  258086 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-999106"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
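	The generated file above is one multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A small sketch that splits such a file and reports each document's kind (the path is the one from the log; parsing into the full Kubernetes API types would pull in k8s.io/apimachinery, so this sticks to the standard library):

```go
// Split the multi-document kubeadm config and print each document's kind.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
	if err != nil {
		panic(err)
	}
	// kubeadm concatenates the four configs with "---" separator lines.
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println("found document:", strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
```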
	
	I1123 08:43:44.362266  258086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:43:44.371147  258086 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 08:43:44.371205  258086 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 08:43:44.379477  258086 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1123 08:43:44.379559  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 08:43:44.379560  258086 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1123 08:43:44.379590  258086 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1123 08:43:44.384906  258086 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 08:43:44.384935  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1123 08:43:45.307760  258086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:43:45.321272  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 08:43:45.325776  258086 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 08:43:45.325807  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1123 08:43:45.440984  258086 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 08:43:45.448490  258086 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 08:43:45.448546  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
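	Binary installation above follows a stat-then-transfer pattern: `stat -c "%s %y" <path>` exiting with status 1 means the binary is missing, so the cached copy is scp'd across. A local stand-in for that check (hypothetical; minikube performs both steps remotely through ssh_runner):

```go
// Copy a cached binary into place only when the destination stat fails,
// mirroring the existence checks logged above.
package main

import (
	"fmt"
	"io"
	"os"
)

func ensureFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // destination already present; skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Illustrative paths modeled on the log's cache layout.
	if err := ensureFile("cache/linux/amd64/v1.34.1/kubectl",
		"/var/lib/minikube/binaries/v1.34.1/kubectl"); err != nil {
		fmt.Println("transfer failed:", err)
	}
}
```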
	I1123 08:43:45.718942  258086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:43:45.729752  258086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 08:43:45.746904  258086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:43:45.764606  258086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1123 08:43:45.779438  258086 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:43:45.783637  258086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:43:45.795787  258086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:43:45.901866  258086 ssh_runner.go:195] Run: sudo systemctl start kubelet
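	The `/etc/hosts` step above uses a bash brace group: filter out any stale `control-plane.minikube.internal` line, append the fresh mapping, write a temp file, and copy it back. A rough Go equivalent (assumed simplification; it writes /etc/hosts directly and must run as root):

```go
// Rewrite /etc/hosts with a single fresh control-plane entry.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.2\tcontrol-plane.minikube.internal" // mapping from the log
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		// Drop any stale mapping for the control-plane name, as grep -v does above.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```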
	I1123 08:43:45.931680  258086 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106 for IP: 192.168.85.2
	I1123 08:43:45.931702  258086 certs.go:195] generating shared ca certs ...
	I1123 08:43:45.931722  258086 certs.go:227] acquiring lock for ca certs: {Name:mk376e2c25eb30d8b09b93cb4624441e819bcc8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:45.931883  258086 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.key
	I1123 08:43:45.931922  258086 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.key
	I1123 08:43:45.931931  258086 certs.go:257] generating profile certs ...
	I1123 08:43:45.932023  258086 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.key
	I1123 08:43:45.932046  258086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.crt with IP's: []
	I1123 08:43:46.076820  258086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.crt ...
	I1123 08:43:46.076852  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.crt: {Name:mk264e21cffc1d235a0a5153e1f533874608a488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.077062  258086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.key ...
	I1123 08:43:46.077094  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/client.key: {Name:mk09f5a31cd584eb4ea102a803f662bacda0e612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.077204  258086 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key.ff765c4c
	I1123 08:43:46.077226  258086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt.ff765c4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:43:46.147038  258086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt.ff765c4c ...
	I1123 08:43:46.147076  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt.ff765c4c: {Name:mk2b60ecfaddc28f6e9e91bd0ff2b48be7ad7023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.147257  258086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key.ff765c4c ...
	I1123 08:43:46.147277  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key.ff765c4c: {Name:mk8ce7b23d7c04fba7d8d30f580f5ae25a8eaa1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.147393  258086 certs.go:382] copying /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt.ff765c4c -> /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt
	I1123 08:43:46.147504  258086 certs.go:386] copying /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key.ff765c4c -> /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key
	I1123 08:43:46.147597  258086 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.key
	I1123 08:43:46.147614  258086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.crt with IP's: []
	I1123 08:43:46.188254  258086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.crt ...
	I1123 08:43:46.188285  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.crt: {Name:mkce831c55c8c6f96bdb743bd92d80212f28ceec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.188486  258086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.key ...
	I1123 08:43:46.188506  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.key: {Name:mk2b9a4c76ac3acf445fdcb1e14850de2c1a5507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
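	The certs phase above generates per-profile client, apiserver, and aggregator ("proxy-client") key pairs, each written under .minikube/profiles/<name>/ behind a file lock. As a simplified illustration of the underlying crypto (an assumption: this sketch self-signs with ECDSA, whereas minikube signs RSA certs with its CA), generating one certificate with an IP SAN looks like:

```go
// Generate an ECDSA key and a self-signed certificate with an IP SAN,
// PEM-encoding both to disk. Illustrative only; not minikube's crypto.go.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user"}, // name taken from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.85.2")}, // one of the SANs above
		KeyUsage:     x509.KeyUsageDigitalSignature,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certOut, _ := os.Create("client.crt")
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()
	keyDER, _ := x509.MarshalECPrivateKey(key)
	keyOut, _ := os.Create("client.key")
	pem.Encode(keyOut, &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
	keyOut.Close()
}
```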
	I1123 08:43:46.188762  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442.pem (1338 bytes)
	W1123 08:43:46.188820  258086 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442_empty.pem, impossibly tiny 0 bytes
	I1123 08:43:46.188836  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:43:46.188874  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:43:46.188907  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:43:46.188942  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem (1675 bytes)
	I1123 08:43:46.189009  258086 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem (1708 bytes)
	I1123 08:43:46.189889  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:43:46.212738  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:43:46.235727  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:43:46.259309  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:43:46.282164  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:43:46.305443  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:43:46.328998  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:43:46.351947  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/no-preload-999106/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:43:46.375511  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:43:46.401909  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442.pem --> /usr/share/ca-certificates/17442.pem (1338 bytes)
	I1123 08:43:46.424180  258086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem --> /usr/share/ca-certificates/174422.pem (1708 bytes)
	I1123 08:43:46.445575  258086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:43:46.461580  258086 ssh_runner.go:195] Run: openssl version
	I1123 08:43:46.468524  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:43:46.477534  258086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:46.482510  258086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:46.482577  258086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:46.523991  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:43:46.535125  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17442.pem && ln -fs /usr/share/ca-certificates/17442.pem /etc/ssl/certs/17442.pem"
	I1123 08:43:46.546052  258086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17442.pem
	I1123 08:43:46.552569  258086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:16 /usr/share/ca-certificates/17442.pem
	I1123 08:43:46.552702  258086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17442.pem
	I1123 08:43:46.600806  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17442.pem /etc/ssl/certs/51391683.0"
	I1123 08:43:46.610524  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/174422.pem && ln -fs /usr/share/ca-certificates/174422.pem /etc/ssl/certs/174422.pem"
	I1123 08:43:46.621451  258086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/174422.pem
	I1123 08:43:46.625905  258086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:16 /usr/share/ca-certificates/174422.pem
	I1123 08:43:46.625966  258086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/174422.pem
	I1123 08:43:46.663055  258086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/174422.pem /etc/ssl/certs/3ec20f2e.0"
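	The openssl calls above compute each certificate's subject hash, which becomes the symlink name OpenSSL expects under /etc/ssl/certs (minikubeCA.pem hashes to b5213941, hence the b5213941.0 link created two steps earlier). A hypothetical helper wrapping the same command:

```go
// Ask openssl for the subject hash that names the /etc/ssl/certs/<hash>.0 link.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func subjectHash(pemPath string) (string, error) {
	// Same invocation as the log: openssl x509 -hash -noout -in <file>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Printf("ln -fs minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}
```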
	I1123 08:43:46.672614  258086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:43:46.676799  258086 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:43:46.676865  258086 kubeadm.go:401] StartCluster: {Name:no-preload-999106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-999106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:43:46.676948  258086 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:43:46.677027  258086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:43:46.706515  258086 cri.go:89] found id: ""
	I1123 08:43:46.706599  258086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:43:46.715791  258086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:43:46.725599  258086 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:43:46.725695  258086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:43:46.734727  258086 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:43:46.734752  258086 kubeadm.go:158] found existing configuration files:
	
	I1123 08:43:46.734794  258086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:43:46.743841  258086 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:43:46.743892  258086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:43:46.752521  258086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:43:46.761347  258086 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:43:46.761400  258086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:43:46.769196  258086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:43:46.777174  258086 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:43:46.777227  258086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:43:46.784869  258086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:43:46.793707  258086 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:43:46.793768  258086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
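	The four grep/rm cycles above implement stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before `kubeadm init` regenerates it. Condensed into one loop (a sketch to be run as root, not minikube's actual kubeadm.go):

```go
// Remove kubeconfigs that are missing or point at the wrong endpoint.
package main

import (
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", f)
		b, err := os.ReadFile(path)
		// Missing file or wrong endpoint: remove, matching the "will remove" branch above.
		if err != nil || !strings.Contains(string(b), endpoint) {
			os.Remove(path)
		}
	}
}
```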
	I1123 08:43:46.801586  258086 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:43:46.858285  258086 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:43:46.916186  258086 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:43:43.286172  254114 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-204346" context rescaled to 1 replicas
	W1123 08:43:44.785588  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
	W1123 08:43:46.785746  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
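	Meanwhile the old-k8s-version profile is polling node readiness, retrying while `Ready` is `False`. Client-go offers helpers like wait.PollUntilContextTimeout for this; below is a dependency-free sketch of the same shape, with a stand-in condition instead of a real node-status check:

```go
// Generic poll-until-true loop with an interval and an overall timeout.
package main

import (
	"errors"
	"fmt"
	"time"
)

func poll(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval) // the "will retry" pause seen in the log
	}
}

func main() {
	start := time.Now()
	err := poll(500*time.Millisecond, 3*time.Second, func() (bool, error) {
		// Stand-in for "node is Ready"; a real check would query the API server.
		return time.Since(start) > time.Second, nil
	})
	fmt.Println("poll result:", err)
}
```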
	I1123 08:43:49.228668  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:49.229070  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:49.229121  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:49.229170  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:49.256973  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:49.256994  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:49.257000  206485 cri.go:89] found id: ""
	I1123 08:43:49.257008  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:49.257070  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.261237  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.264766  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:49.264830  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:49.290113  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:49.290135  206485 cri.go:89] found id: ""
	I1123 08:43:49.290145  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:49.290199  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.293989  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:49.294053  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:49.320161  206485 cri.go:89] found id: ""
	I1123 08:43:49.320191  206485 logs.go:282] 0 containers: []
	W1123 08:43:49.320202  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:49.320210  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:49.320264  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:49.347363  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:49.347384  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:49.347391  206485 cri.go:89] found id: ""
	I1123 08:43:49.347407  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:49.347464  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.351525  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.355374  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:49.355433  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:49.382984  206485 cri.go:89] found id: ""
	I1123 08:43:49.383010  206485 logs.go:282] 0 containers: []
	W1123 08:43:49.383020  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:49.383028  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:49.383086  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:49.409377  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:49.409402  206485 cri.go:89] found id: "7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:49.409408  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:49.409413  206485 cri.go:89] found id: ""
	I1123 08:43:49.409421  206485 logs.go:282] 3 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:49.409468  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.413850  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.417701  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:49.421307  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:49.421373  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:49.447409  206485 cri.go:89] found id: ""
	I1123 08:43:49.447433  206485 logs.go:282] 0 containers: []
	W1123 08:43:49.447444  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:49.447451  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:49.447512  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:49.474526  206485 cri.go:89] found id: ""
	I1123 08:43:49.474554  206485 logs.go:282] 0 containers: []
	W1123 08:43:49.474562  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:49.474572  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:49.474580  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:49.566947  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:49.566990  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:49.581192  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:49.581218  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:49.640574  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:49.640596  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:49.640610  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:49.676070  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:49.676097  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:49.710524  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:49.710555  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:49.785389  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:49.785422  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:49.819651  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:49.819677  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:49.847192  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:49.847216  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:49.878622  206485 logs.go:123] Gathering logs for kube-controller-manager [7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb] ...
	I1123 08:43:49.878674  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7629f2a7eb00cde594bb5ce8d8a3080ec5e16484bb96c70953456b9ad4f543bb"
	I1123 08:43:49.904924  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:49.904958  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:49.937225  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:49.937252  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:49.987441  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:49.987483  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1123 08:43:49.285708  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
	W1123 08:43:51.285827  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
	I1123 08:43:56.990600  258086 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:43:56.990724  258086 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:43:56.990889  258086 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:43:56.990976  258086 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:43:56.991027  258086 kubeadm.go:319] OS: Linux
	I1123 08:43:56.991098  258086 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:43:56.991170  258086 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:43:56.991327  258086 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:43:56.991401  258086 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:43:56.991513  258086 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:43:56.991594  258086 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:43:56.991696  258086 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:43:56.991760  258086 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:43:56.991928  258086 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:43:56.992079  258086 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:43:56.992203  258086 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:43:56.992277  258086 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:43:56.993629  258086 out.go:252]   - Generating certificates and keys ...
	I1123 08:43:56.993773  258086 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:43:56.993882  258086 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:43:56.993978  258086 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:43:56.994054  258086 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:43:56.994139  258086 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:43:56.994210  258086 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:43:56.994287  258086 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:43:56.994448  258086 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-999106] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:43:56.994523  258086 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:43:56.994701  258086 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-999106] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:43:56.994808  258086 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:43:56.994907  258086 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:43:56.994974  258086 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:43:56.995052  258086 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:43:56.995136  258086 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:43:56.995230  258086 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:43:56.995314  258086 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:43:56.995407  258086 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:43:56.995507  258086 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:43:56.995596  258086 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:43:56.995670  258086 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:43:56.998197  258086 out.go:252]   - Booting up control plane ...
	I1123 08:43:56.998282  258086 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:43:56.998367  258086 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:43:56.998479  258086 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:43:56.998614  258086 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:43:56.998760  258086 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:43:56.998861  258086 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:43:56.998949  258086 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:43:56.998984  258086 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:43:56.999108  258086 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:43:56.999224  258086 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:43:56.999284  258086 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.962401ms
	I1123 08:43:56.999376  258086 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:43:56.999453  258086 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 08:43:56.999531  258086 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:43:56.999598  258086 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:43:56.999680  258086 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.69972236s
	I1123 08:43:56.999756  258086 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.979262438s
	I1123 08:43:56.999857  258086 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502236354s
	I1123 08:43:56.999983  258086 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:43:57.000181  258086 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:43:57.000269  258086 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:43:57.000528  258086 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-999106 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:43:57.000596  258086 kubeadm.go:319] [bootstrap-token] Using token: augmq1.wtvrtjusohbhz9fp
	I1123 08:43:57.002234  258086 out.go:252]   - Configuring RBAC rules ...
	I1123 08:43:57.002330  258086 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:43:57.002408  258086 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:43:57.002539  258086 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:43:57.002709  258086 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:43:57.002823  258086 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:43:57.002898  258086 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:43:57.003040  258086 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:43:57.003091  258086 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:43:57.003157  258086 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:43:57.003173  258086 kubeadm.go:319] 
	I1123 08:43:57.003224  258086 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:43:57.003229  258086 kubeadm.go:319] 
	I1123 08:43:57.003293  258086 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:43:57.003299  258086 kubeadm.go:319] 
	I1123 08:43:57.003325  258086 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:43:57.003380  258086 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:43:57.003424  258086 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:43:57.003429  258086 kubeadm.go:319] 
	I1123 08:43:57.003474  258086 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:43:57.003483  258086 kubeadm.go:319] 
	I1123 08:43:57.003523  258086 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:43:57.003529  258086 kubeadm.go:319] 
	I1123 08:43:57.003586  258086 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:43:57.003674  258086 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:43:57.003774  258086 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:43:57.003795  258086 kubeadm.go:319] 
	I1123 08:43:57.003914  258086 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:43:57.004021  258086 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:43:57.004031  258086 kubeadm.go:319] 
	I1123 08:43:57.004153  258086 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token augmq1.wtvrtjusohbhz9fp \
	I1123 08:43:57.004275  258086 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c48a3b11504a9c7a5d242d913eadf6a5354a8cb06c9ffcf8385d22efb04d8fa \
	I1123 08:43:57.004298  258086 kubeadm.go:319] 	--control-plane 
	I1123 08:43:57.004302  258086 kubeadm.go:319] 
	I1123 08:43:57.004373  258086 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:43:57.004379  258086 kubeadm.go:319] 
	I1123 08:43:57.004452  258086 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token augmq1.wtvrtjusohbhz9fp \
	I1123 08:43:57.004563  258086 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c48a3b11504a9c7a5d242d913eadf6a5354a8cb06c9ffcf8385d22efb04d8fa 
	I1123 08:43:57.004575  258086 cni.go:84] Creating CNI manager for ""
	I1123 08:43:57.004581  258086 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:43:57.007194  258086 out.go:179] * Configuring CNI (Container Networking Interface) ...
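An aside on the join command printed above: the `--discovery-token-ca-cert-hash` value is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo, which lets a joining node pin the CA it discovers through the bootstrap token. The following is a minimal, illustrative Go sketch that recomputes such a hash from a CA certificate; the `/var/lib/minikube/certs/ca.crt` path is inferred from the certificateDir in the log and is an assumption, not minikube's own code.

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed path: the certificateDir shown in the kubeadm log above.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is the SHA-256 of the CA's DER-encoded
	// SubjectPublicKeyInfo, printed in the form "sha256:<hex>".
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```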
	I1123 08:43:52.520061  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:52.520694  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:52.520747  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:52.520799  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:52.553943  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:52.553969  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:52.553975  206485 cri.go:89] found id: ""
	I1123 08:43:52.553983  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:52.554042  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.559842  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.565197  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:52.565266  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:52.601499  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:52.601529  206485 cri.go:89] found id: ""
	I1123 08:43:52.601568  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:52.601621  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.606848  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:52.606925  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:52.645028  206485 cri.go:89] found id: ""
	I1123 08:43:52.645061  206485 logs.go:282] 0 containers: []
	W1123 08:43:52.645072  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:52.645079  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:52.645139  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:52.681457  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:52.681484  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:52.681490  206485 cri.go:89] found id: ""
	I1123 08:43:52.681499  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:52.681557  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.686548  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.690588  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:52.690682  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:52.723180  206485 cri.go:89] found id: ""
	I1123 08:43:52.723208  206485 logs.go:282] 0 containers: []
	W1123 08:43:52.723217  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:52.723224  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:52.723287  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:52.756887  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:52.756911  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:52.756921  206485 cri.go:89] found id: ""
	I1123 08:43:52.756929  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:52.756985  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.761180  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:52.765188  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:52.765247  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:52.795290  206485 cri.go:89] found id: ""
	I1123 08:43:52.795319  206485 logs.go:282] 0 containers: []
	W1123 08:43:52.795329  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:52.795336  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:52.795395  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:52.822978  206485 cri.go:89] found id: ""
	I1123 08:43:52.823006  206485 logs.go:282] 0 containers: []
	W1123 08:43:52.823013  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:52.823022  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:52.823034  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:52.859205  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:52.859240  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:52.910295  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:52.910334  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:52.948004  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:52.948045  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:52.982700  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:52.982734  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:53.055592  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:53.055634  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:53.097286  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:53.097327  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:53.133102  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:53.133146  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:53.170688  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:53.170722  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:53.281419  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:53.281464  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:53.298748  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:53.298777  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:53.373016  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:53.373040  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:53.373054  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:55.914776  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:55.915250  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:55.915303  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:55.915351  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:55.943544  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:55.943567  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:55.943572  206485 cri.go:89] found id: ""
	I1123 08:43:55.943579  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:55.943622  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:55.948391  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:55.952924  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:55.952992  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:55.981407  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:55.981431  206485 cri.go:89] found id: ""
	I1123 08:43:55.981441  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:55.981501  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:55.986304  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:55.986378  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:56.014167  206485 cri.go:89] found id: ""
	I1123 08:43:56.014192  206485 logs.go:282] 0 containers: []
	W1123 08:43:56.014200  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:56.014206  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:56.014262  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:56.050121  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:56.050153  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:56.050160  206485 cri.go:89] found id: ""
	I1123 08:43:56.050170  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:56.050236  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:56.055306  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:56.059507  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:56.059586  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:56.092810  206485 cri.go:89] found id: ""
	I1123 08:43:56.092843  206485 logs.go:282] 0 containers: []
	W1123 08:43:56.092856  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:56.092864  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:56.092931  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:56.126845  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:56.126869  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:56.126874  206485 cri.go:89] found id: ""
	I1123 08:43:56.126884  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:56.126939  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:56.131943  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:56.135880  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:56.135945  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:56.163669  206485 cri.go:89] found id: ""
	I1123 08:43:56.163696  206485 logs.go:282] 0 containers: []
	W1123 08:43:56.163707  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:56.163714  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:56.163773  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:56.197602  206485 cri.go:89] found id: ""
	I1123 08:43:56.197638  206485 logs.go:282] 0 containers: []
	W1123 08:43:56.197660  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:56.197672  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:56.197689  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:56.238940  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:56.238981  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:56.288636  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:56.288691  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:56.324266  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:56.324299  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:56.378458  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:56.378498  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:56.417284  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:56.417313  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:56.509149  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:56.509182  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:56.523057  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:56.523082  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:56.583048  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:56.583074  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:56.583095  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:56.618320  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:56.618358  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:56.651682  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:56.651713  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:56.709657  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:56.709694  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
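The repeated `api_server.go` lines in this stream follow one simple probe: GET `https://<node>:8443/healthz`, with any transport error (here `connection refused` while the apiserver is down) reported as "stopped" before logs are re-gathered and the check retried. Below is a self-contained Go sketch of that probe under the same apparent assumptions: the serving certificate is not trusted by the host, so verification is skipped, and the endpoint is the one from the log, used purely for illustration.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes a kube-apiserver /healthz endpoint once. Any
// transport error (e.g. connection refused while the static pod is
// down) is surfaced as "stopped", mirroring the log lines above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver cert is not in the host trust
			// store, so the probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unhealthy: %d %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	// Endpoint taken from the log above; retry a few times on failure.
	for i := 0; i < 5; i++ {
		if err := checkHealthz("https://192.168.76.2:8443/healthz"); err != nil {
			fmt.Println(err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("ok")
		return
	}
}
```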
	I1123 08:43:57.008714  258086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:43:57.013402  258086 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:43:57.013443  258086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:43:57.028881  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:43:57.253419  258086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:43:57.253530  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:57.253599  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-999106 minikube.k8s.io/updated_at=2025_11_23T08_43_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=no-preload-999106 minikube.k8s.io/primary=true
	I1123 08:43:57.264168  258086 ops.go:34] apiserver oom_adj: -16
	I1123 08:43:57.330032  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
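The `oom_adj` lines above show minikube confirming the apiserver's OOM score: it reads `/proc/$(pgrep kube-apiserver)/oom_adj` and the log reports -16, a strongly negative value that tells the kernel to avoid OOM-killing the apiserver. A small Go sketch of the same read follows; it assumes `pgrep` is on PATH and takes the first matching PID, and is illustrative rather than minikube's implementation.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the apiserver PID, as the `cat /proc/$(pgrep kube-apiserver)/oom_adj`
	// command in the log does.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0] // first match is enough here
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	// A negative score (the log shows -16) deprioritizes the apiserver
	// as an OOM-kill target.
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}
```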
	W1123 08:43:53.286319  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
	W1123 08:43:55.786003  254114 node_ready.go:57] node "old-k8s-version-204346" has "Ready":"False" status (will retry)
	I1123 08:43:57.285411  254114 node_ready.go:49] node "old-k8s-version-204346" is "Ready"
	I1123 08:43:57.285445  254114 node_ready.go:38] duration metric: took 14.503433565s for node "old-k8s-version-204346" to be "Ready" ...
	I1123 08:43:57.285462  254114 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:43:57.285564  254114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:43:57.301686  254114 api_server.go:72] duration metric: took 14.973147695s to wait for apiserver process to appear ...
	I1123 08:43:57.301718  254114 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:43:57.301742  254114 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:43:57.306545  254114 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:43:57.308093  254114 api_server.go:141] control plane version: v1.28.0
	I1123 08:43:57.308124  254114 api_server.go:131] duration metric: took 6.398178ms to wait for apiserver health ...
	I1123 08:43:57.308135  254114 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:43:57.312486  254114 system_pods.go:59] 8 kube-system pods found
	I1123 08:43:57.312519  254114 system_pods.go:61] "coredns-5dd5756b68-2fdsv" [1c71e052-b3c2-4875-8aeb-7d724ee26e06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:57.312525  254114 system_pods.go:61] "etcd-old-k8s-version-204346" [58cc20a4-23f1-4a5a-ba0a-03fadfc6df09] Running
	I1123 08:43:57.312530  254114 system_pods.go:61] "kindnet-q8xnm" [c3178adf-8eb3-4210-9674-fdda89d3317d] Running
	I1123 08:43:57.312539  254114 system_pods.go:61] "kube-apiserver-old-k8s-version-204346" [e63e828c-37a0-48ab-9413-932b3cde09cc] Running
	I1123 08:43:57.312542  254114 system_pods.go:61] "kube-controller-manager-old-k8s-version-204346" [bbaefdad-f8f3-4264-a467-5f75937de2a0] Running
	I1123 08:43:57.312546  254114 system_pods.go:61] "kube-proxy-tzq9b" [5d122719-2577-438f-bae7-72a1034f88ef] Running
	I1123 08:43:57.312548  254114 system_pods.go:61] "kube-scheduler-old-k8s-version-204346" [773bcc91-2553-4606-91ab-f32ec0ba3738] Running
	I1123 08:43:57.312553  254114 system_pods.go:61] "storage-provisioner" [372382d8-d23f-4e6d-89ae-8f2c9c46b6dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:43:57.312559  254114 system_pods.go:74] duration metric: took 4.418082ms to wait for pod list to return data ...
	I1123 08:43:57.312566  254114 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:43:57.315607  254114 default_sa.go:45] found service account: "default"
	I1123 08:43:57.315634  254114 default_sa.go:55] duration metric: took 3.061615ms for default service account to be created ...
	I1123 08:43:57.315674  254114 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:43:57.320602  254114 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:57.320629  254114 system_pods.go:89] "coredns-5dd5756b68-2fdsv" [1c71e052-b3c2-4875-8aeb-7d724ee26e06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:57.320634  254114 system_pods.go:89] "etcd-old-k8s-version-204346" [58cc20a4-23f1-4a5a-ba0a-03fadfc6df09] Running
	I1123 08:43:57.320639  254114 system_pods.go:89] "kindnet-q8xnm" [c3178adf-8eb3-4210-9674-fdda89d3317d] Running
	I1123 08:43:57.320657  254114 system_pods.go:89] "kube-apiserver-old-k8s-version-204346" [e63e828c-37a0-48ab-9413-932b3cde09cc] Running
	I1123 08:43:57.320663  254114 system_pods.go:89] "kube-controller-manager-old-k8s-version-204346" [bbaefdad-f8f3-4264-a467-5f75937de2a0] Running
	I1123 08:43:57.320668  254114 system_pods.go:89] "kube-proxy-tzq9b" [5d122719-2577-438f-bae7-72a1034f88ef] Running
	I1123 08:43:57.320673  254114 system_pods.go:89] "kube-scheduler-old-k8s-version-204346" [773bcc91-2553-4606-91ab-f32ec0ba3738] Running
	I1123 08:43:57.320679  254114 system_pods.go:89] "storage-provisioner" [372382d8-d23f-4e6d-89ae-8f2c9c46b6dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:43:57.320708  254114 retry.go:31] will retry after 281.398987ms: missing components: kube-dns
	I1123 08:43:57.607881  254114 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:57.607919  254114 system_pods.go:89] "coredns-5dd5756b68-2fdsv" [1c71e052-b3c2-4875-8aeb-7d724ee26e06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:57.607927  254114 system_pods.go:89] "etcd-old-k8s-version-204346" [58cc20a4-23f1-4a5a-ba0a-03fadfc6df09] Running
	I1123 08:43:57.607936  254114 system_pods.go:89] "kindnet-q8xnm" [c3178adf-8eb3-4210-9674-fdda89d3317d] Running
	I1123 08:43:57.607942  254114 system_pods.go:89] "kube-apiserver-old-k8s-version-204346" [e63e828c-37a0-48ab-9413-932b3cde09cc] Running
	I1123 08:43:57.607948  254114 system_pods.go:89] "kube-controller-manager-old-k8s-version-204346" [bbaefdad-f8f3-4264-a467-5f75937de2a0] Running
	I1123 08:43:57.607952  254114 system_pods.go:89] "kube-proxy-tzq9b" [5d122719-2577-438f-bae7-72a1034f88ef] Running
	I1123 08:43:57.607957  254114 system_pods.go:89] "kube-scheduler-old-k8s-version-204346" [773bcc91-2553-4606-91ab-f32ec0ba3738] Running
	I1123 08:43:57.607964  254114 system_pods.go:89] "storage-provisioner" [372382d8-d23f-4e6d-89ae-8f2c9c46b6dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:43:57.607991  254114 retry.go:31] will retry after 389.750642ms: missing components: kube-dns
	I1123 08:43:58.002207  254114 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:58.002234  254114 system_pods.go:89] "coredns-5dd5756b68-2fdsv" [1c71e052-b3c2-4875-8aeb-7d724ee26e06] Running
	I1123 08:43:58.002240  254114 system_pods.go:89] "etcd-old-k8s-version-204346" [58cc20a4-23f1-4a5a-ba0a-03fadfc6df09] Running
	I1123 08:43:58.002249  254114 system_pods.go:89] "kindnet-q8xnm" [c3178adf-8eb3-4210-9674-fdda89d3317d] Running
	I1123 08:43:58.002253  254114 system_pods.go:89] "kube-apiserver-old-k8s-version-204346" [e63e828c-37a0-48ab-9413-932b3cde09cc] Running
	I1123 08:43:58.002257  254114 system_pods.go:89] "kube-controller-manager-old-k8s-version-204346" [bbaefdad-f8f3-4264-a467-5f75937de2a0] Running
	I1123 08:43:58.002261  254114 system_pods.go:89] "kube-proxy-tzq9b" [5d122719-2577-438f-bae7-72a1034f88ef] Running
	I1123 08:43:58.002264  254114 system_pods.go:89] "kube-scheduler-old-k8s-version-204346" [773bcc91-2553-4606-91ab-f32ec0ba3738] Running
	I1123 08:43:58.002267  254114 system_pods.go:89] "storage-provisioner" [372382d8-d23f-4e6d-89ae-8f2c9c46b6dc] Running
	I1123 08:43:58.002275  254114 system_pods.go:126] duration metric: took 686.59398ms to wait for k8s-apps to be running ...
	I1123 08:43:58.002285  254114 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:43:58.002331  254114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:43:58.016798  254114 system_svc.go:56] duration metric: took 14.504815ms WaitForService to wait for kubelet
	I1123 08:43:58.016829  254114 kubeadm.go:587] duration metric: took 15.688298138s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:43:58.016854  254114 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:43:58.021952  254114 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:43:58.021983  254114 node_conditions.go:123] node cpu capacity is 8
	I1123 08:43:58.022010  254114 node_conditions.go:105] duration metric: took 5.146561ms to run NodePressure ...
	I1123 08:43:58.022026  254114 start.go:242] waiting for startup goroutines ...
	I1123 08:43:58.022040  254114 start.go:247] waiting for cluster config update ...
	I1123 08:43:58.022056  254114 start.go:256] writing updated cluster config ...
	I1123 08:43:58.022354  254114 ssh_runner.go:195] Run: rm -f paused
	I1123 08:43:58.026482  254114 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:43:58.030783  254114 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-2fdsv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.035326  254114 pod_ready.go:94] pod "coredns-5dd5756b68-2fdsv" is "Ready"
	I1123 08:43:58.035351  254114 pod_ready.go:86] duration metric: took 4.542747ms for pod "coredns-5dd5756b68-2fdsv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.038155  254114 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.042389  254114 pod_ready.go:94] pod "etcd-old-k8s-version-204346" is "Ready"
	I1123 08:43:58.042413  254114 pod_ready.go:86] duration metric: took 4.236026ms for pod "etcd-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.045530  254114 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.049686  254114 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-204346" is "Ready"
	I1123 08:43:58.049708  254114 pod_ready.go:86] duration metric: took 4.151976ms for pod "kube-apiserver-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.052167  254114 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.430619  254114 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-204346" is "Ready"
	I1123 08:43:58.430662  254114 pod_ready.go:86] duration metric: took 378.478321ms for pod "kube-controller-manager-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:58.631434  254114 pod_ready.go:83] waiting for pod "kube-proxy-tzq9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:59.030458  254114 pod_ready.go:94] pod "kube-proxy-tzq9b" is "Ready"
	I1123 08:43:59.030484  254114 pod_ready.go:86] duration metric: took 399.024693ms for pod "kube-proxy-tzq9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:59.231371  254114 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:59.630789  254114 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-204346" is "Ready"
	I1123 08:43:59.630824  254114 pod_ready.go:86] duration metric: took 399.424476ms for pod "kube-scheduler-old-k8s-version-204346" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:59.630840  254114 pod_ready.go:40] duration metric: took 1.604329749s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:43:59.682106  254114 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 08:43:59.683780  254114 out.go:203] 
	W1123 08:43:59.685129  254114 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:43:59.686407  254114 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:43:59.689781  254114 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-204346" cluster and "default" namespace by default
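The `pod_ready.go` waits above poll each control-plane pod until its PodReady condition reports True (or the pod is gone). The sketch below shows the same check with client-go, under stated assumptions: the kubeconfig path is the one this test run updates elsewhere in the log, and the pod name is the CoreDNS pod from this run; both are used purely for illustration.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True,
// the same condition the pod_ready.go waits above are checking.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path, taken from the test run's log.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21969-13876/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		// Pod name from this run, for illustration only.
		pod, err := client.CoreV1().Pods("kube-system").
			Get(ctx, "coredns-5dd5756b68-2fdsv", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```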
	I1123 08:43:59.237742  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:59.238210  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:43:59.238271  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:43:59.238328  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:43:59.266168  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:59.266191  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:59.266197  206485 cri.go:89] found id: ""
	I1123 08:43:59.266205  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:43:59.266261  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.270518  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.274380  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:43:59.274439  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:43:59.301514  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:59.301542  206485 cri.go:89] found id: ""
	I1123 08:43:59.301552  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:43:59.301612  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.305940  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:43:59.306010  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:43:59.332361  206485 cri.go:89] found id: ""
	I1123 08:43:59.332384  206485 logs.go:282] 0 containers: []
	W1123 08:43:59.332394  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:43:59.332402  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:43:59.332453  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:43:59.360415  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:43:59.360515  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:59.360533  206485 cri.go:89] found id: ""
	I1123 08:43:59.360541  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:43:59.360600  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.364967  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.369350  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:43:59.369411  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:43:59.400932  206485 cri.go:89] found id: ""
	I1123 08:43:59.400960  206485 logs.go:282] 0 containers: []
	W1123 08:43:59.400971  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:43:59.400979  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:43:59.401039  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:43:59.426988  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:59.427009  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:59.427013  206485 cri.go:89] found id: ""
	I1123 08:43:59.427019  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:43:59.427065  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.431308  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:43:59.435139  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:43:59.435187  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:43:59.461062  206485 cri.go:89] found id: ""
	I1123 08:43:59.461089  206485 logs.go:282] 0 containers: []
	W1123 08:43:59.461098  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:43:59.461106  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:43:59.461156  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:43:59.487437  206485 cri.go:89] found id: ""
	I1123 08:43:59.487458  206485 logs.go:282] 0 containers: []
	W1123 08:43:59.487467  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:43:59.487476  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:43:59.487487  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:43:59.520087  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:43:59.520115  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:43:59.551620  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:43:59.551662  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:43:59.610836  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:43:59.610857  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:43:59.610875  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:43:59.647413  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:43:59.647458  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:43:59.686992  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:43:59.687024  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:43:59.724084  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:43:59.724115  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:43:59.760830  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:43:59.760916  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:43:59.811485  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:43:59.811519  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:43:59.920592  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:43:59.920624  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:43:59.937635  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:43:59.937681  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:43:59.974909  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:43:59.974948  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
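Each `cri.go` listing in this stream shells out to `crictl ps -a --quiet --name=<component>`, which prints one container ID per line (or nothing when no container matches), and the `logs.go` steps then tail each returned ID. Here is a minimal Go sketch of that listing step, assuming `crictl` is installed and passwordless `sudo` is available, as it is inside the test node.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mimics the cri.go calls above: with --quiet, crictl
// prints one container ID per line, and an empty output means no
// container matched the name filter.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// Matches the "N containers: [...]" shape of the logs.go lines above.
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
```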
	I1123 08:43:57.830451  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:58.330875  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:58.830628  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:59.330282  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:43:59.830162  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:00.330422  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:00.830950  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:01.330805  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:01.830841  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:02.330880  258086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:02.414724  258086 kubeadm.go:1114] duration metric: took 5.161257652s to wait for elevateKubeSystemPrivileges
	I1123 08:44:02.414756  258086 kubeadm.go:403] duration metric: took 15.737896165s to StartCluster
	I1123 08:44:02.414776  258086 settings.go:142] acquiring lock: {Name:mk2c00a8b461754a49d5c7fd5af34c7d1005153a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:02.414842  258086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:44:02.416821  258086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/kubeconfig: {Name:mk636046b7146fd65b5638a6d549b76e61f7f055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:02.417741  258086 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:44:02.417762  258086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:44:02.417786  258086 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:02.417889  258086 addons.go:70] Setting storage-provisioner=true in profile "no-preload-999106"
	I1123 08:44:02.417910  258086 addons.go:239] Setting addon storage-provisioner=true in "no-preload-999106"
	I1123 08:44:02.417926  258086 addons.go:70] Setting default-storageclass=true in profile "no-preload-999106"
	I1123 08:44:02.417947  258086 config.go:182] Loaded profile config "no-preload-999106": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:44:02.417950  258086 host.go:66] Checking if "no-preload-999106" exists ...
	I1123 08:44:02.417952  258086 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-999106"
	I1123 08:44:02.418452  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:44:02.418590  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:44:02.419817  258086 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:02.422556  258086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:02.448285  258086 addons.go:239] Setting addon default-storageclass=true in "no-preload-999106"
	I1123 08:44:02.448336  258086 host.go:66] Checking if "no-preload-999106" exists ...
	I1123 08:44:02.448496  258086 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:02.448879  258086 cli_runner.go:164] Run: docker container inspect no-preload-999106 --format={{.State.Status}}
	I1123 08:44:02.449866  258086 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:02.449888  258086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:02.449940  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:44:02.479849  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:44:02.481186  258086 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:02.481210  258086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:02.481267  258086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-999106
	I1123 08:44:02.506758  258086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/no-preload-999106/id_rsa Username:docker}
	I1123 08:44:02.518200  258086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:44:02.581982  258086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:02.612639  258086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:02.629441  258086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:02.722551  258086 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 08:44:02.724186  258086 node_ready.go:35] waiting up to 6m0s for node "no-preload-999106" to be "Ready" ...
	I1123 08:44:02.952603  258086 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
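The burst of `kubectl get sa default` runs a few lines above, roughly every 500ms, appears to be minikube waiting for kube-controller-manager to create the `default` ServiceAccount before the cluster-admin binding can take effect; the loop ends once the command exits zero (the `elevateKubeSystemPrivileges` duration line). A hedged Go sketch of that wait follows, with the kubeconfig path taken from the log and the timeout chosen arbitrarily for the example.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Path from the log above; the timeout is an illustrative choice.
	const kubeconfig = "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the "default" ServiceAccount exists,
		// which is what the repeated ssh_runner lines are probing.
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
```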
	I1123 08:44:02.531044  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:44:02.531451  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:44:02.531515  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:44:02.531572  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:44:02.568683  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:02.568716  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:02.568723  206485 cri.go:89] found id: ""
	I1123 08:44:02.568732  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:44:02.568799  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.573171  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.577424  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:44:02.577582  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:44:02.618894  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:02.618923  206485 cri.go:89] found id: ""
	I1123 08:44:02.618932  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:44:02.618987  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.624397  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:44:02.624456  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:44:02.659100  206485 cri.go:89] found id: ""
	I1123 08:44:02.659131  206485 logs.go:282] 0 containers: []
	W1123 08:44:02.659143  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:44:02.659151  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:44:02.659213  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:44:02.694829  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:02.694848  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:02.694852  206485 cri.go:89] found id: ""
	I1123 08:44:02.694859  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:44:02.694907  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.700604  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.705763  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:44:02.705843  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:44:02.741480  206485 cri.go:89] found id: ""
	I1123 08:44:02.741510  206485 logs.go:282] 0 containers: []
	W1123 08:44:02.741523  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:44:02.741529  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:44:02.741595  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:44:02.778417  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:02.778442  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:02.778448  206485 cri.go:89] found id: ""
	I1123 08:44:02.778456  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:44:02.778518  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.784422  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:02.789717  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:44:02.789794  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:44:02.821165  206485 cri.go:89] found id: ""
	I1123 08:44:02.821194  206485 logs.go:282] 0 containers: []
	W1123 08:44:02.821205  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:44:02.821216  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:44:02.821271  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:44:02.852719  206485 cri.go:89] found id: ""
	I1123 08:44:02.852745  206485 logs.go:282] 0 containers: []
	W1123 08:44:02.852754  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:44:02.852766  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:44:02.852785  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:02.892590  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:44:02.892629  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:02.926138  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:44:02.926174  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:02.962943  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:44:02.962982  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:44:02.999133  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:44:02.999165  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:44:03.103866  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:44:03.103901  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:44:03.118230  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:44:03.118258  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:03.152826  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:44:03.152853  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:03.207774  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:44:03.207809  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:44:03.255093  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:44:03.255135  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:44:03.316127  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:44:03.316156  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:44:03.316171  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:03.350816  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:44:03.350855  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:05.885724  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:44:05.886146  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:44:05.886208  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:44:05.886271  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:44:05.912631  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:05.912667  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:05.912672  206485 cri.go:89] found id: ""
	I1123 08:44:05.912681  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:44:05.912736  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:05.916915  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:05.920714  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:44:05.920785  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:44:05.948197  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:05.948226  206485 cri.go:89] found id: ""
	I1123 08:44:05.948237  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:44:05.948297  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:05.952344  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:44:05.952394  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:44:05.979281  206485 cri.go:89] found id: ""
	I1123 08:44:05.979302  206485 logs.go:282] 0 containers: []
	W1123 08:44:05.979309  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:44:05.979315  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:44:05.979360  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:44:06.005748  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:06.005775  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:06.005781  206485 cri.go:89] found id: ""
	I1123 08:44:06.005790  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:44:06.005842  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:06.009813  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:06.013567  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:44:06.013631  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:44:06.040041  206485 cri.go:89] found id: ""
	I1123 08:44:06.040069  206485 logs.go:282] 0 containers: []
	W1123 08:44:06.040082  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:44:06.040090  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:44:06.040146  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:44:06.068400  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:06.068423  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:06.068428  206485 cri.go:89] found id: ""
	I1123 08:44:06.068435  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:44:06.068489  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:06.072472  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:06.076295  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:44:06.076354  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:44:06.102497  206485 cri.go:89] found id: ""
	I1123 08:44:06.102525  206485 logs.go:282] 0 containers: []
	W1123 08:44:06.102538  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:44:06.102546  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:44:06.102607  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:44:06.130104  206485 cri.go:89] found id: ""
	I1123 08:44:06.130125  206485 logs.go:282] 0 containers: []
	W1123 08:44:06.130132  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:44:06.130141  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:44:06.130150  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:44:06.219429  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:44:06.219465  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:44:06.278463  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:44:06.278491  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:44:06.278507  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:06.315308  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:44:06.315344  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:06.374595  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:44:06.374627  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:06.404338  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:44:06.404365  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:44:06.453101  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:44:06.453130  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:44:06.466457  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:44:06.466503  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:06.499235  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:44:06.499264  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:06.531782  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:44:06.531811  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:06.567190  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:44:06.567225  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:06.595596  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:44:06.595626  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:44:02.953927  258086 addons.go:530] duration metric: took 536.142427ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:44:03.227564  258086 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-999106" context rescaled to 1 replicas
	W1123 08:44:04.727505  258086 node_ready.go:57] node "no-preload-999106" has "Ready":"False" status (will retry)
	W1123 08:44:07.227319  258086 node_ready.go:57] node "no-preload-999106" has "Ready":"False" status (will retry)
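	
	The kapi.go:214 line above ("rescaled to 1 replicas") is minikube trimming the default two-replica coredns Deployment down to one for a single-node cluster. A hedged manual equivalent of that rescale (minikube does this through the API client, not by shelling out to kubectl):
	
	    # Sketch only; context name taken from the log.
	    kubectl --context no-preload-999106 -n kube-system scale deployment coredns --replicas=1
	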
	I1123 08:44:09.129199  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:44:09.129705  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:44:09.129766  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:44:09.129825  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:44:09.156517  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:09.156541  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:09.156546  206485 cri.go:89] found id: ""
	I1123 08:44:09.156553  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:44:09.156609  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:09.160731  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:09.164606  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:44:09.164701  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:44:09.190968  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:09.190989  206485 cri.go:89] found id: ""
	I1123 08:44:09.190998  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:44:09.191055  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:09.195105  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:44:09.195172  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:44:09.222111  206485 cri.go:89] found id: ""
	I1123 08:44:09.222135  206485 logs.go:282] 0 containers: []
	W1123 08:44:09.222143  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:44:09.222150  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:44:09.222208  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:44:09.249482  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:09.249504  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:09.249508  206485 cri.go:89] found id: ""
	I1123 08:44:09.249514  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:44:09.249571  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:09.253482  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:09.257347  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:44:09.257412  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:44:09.284419  206485 cri.go:89] found id: ""
	I1123 08:44:09.284442  206485 logs.go:282] 0 containers: []
	W1123 08:44:09.284455  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:44:09.284463  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:44:09.284516  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:44:09.310860  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:09.310887  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:09.310893  206485 cri.go:89] found id: ""
	I1123 08:44:09.310902  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:44:09.310958  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:09.315221  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:09.319027  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:44:09.319091  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:44:09.345532  206485 cri.go:89] found id: ""
	I1123 08:44:09.345557  206485 logs.go:282] 0 containers: []
	W1123 08:44:09.345568  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:44:09.345575  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:44:09.345656  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:44:09.373433  206485 cri.go:89] found id: ""
	I1123 08:44:09.373457  206485 logs.go:282] 0 containers: []
	W1123 08:44:09.373467  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:44:09.373478  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:44:09.373511  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:44:09.388342  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:44:09.388377  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:44:09.446418  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:44:09.446441  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:44:09.446457  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:09.480003  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:44:09.480036  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:09.520856  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:44:09.520887  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:09.580293  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:44:09.580334  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:09.614373  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:44:09.614404  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:09.643177  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:44:09.643204  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:09.676566  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:44:09.676593  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:44:09.771524  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:44:09.771560  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:09.803272  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:44:09.803301  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:44:09.851726  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:44:09.851765  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
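	
	The block that repeats above at 08:44:02, 08:44:05 and 08:44:09 is minikube's apiserver recovery loop: probe /healthz, and while the connection is refused, enumerate control-plane containers with crictl and collect their logs plus the kubelet and containerd journals ("describe nodes" fails in each pass for the same reason, since localhost:8443 is down). A condensed sketch of the equivalent manual loop, with the endpoint, unit names, and log depth copied from the log (the sleep interval is an assumption):
	
	    # Unauthenticated /healthz probing relies on the default
	    # system:public-info-viewer binding; -k skips TLS verification.
	    until curl -fsk https://192.168.76.2:8443/healthz; do
	      sudo crictl ps -a --quiet --name=kube-apiserver
	      sudo journalctl -u kubelet -n 400
	      sudo journalctl -u containerd -n 400
	      sleep 3
	    done
	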
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	1357388ae0aa5       56cc512116c8f       10 seconds ago      Running             busybox                   0                   34632f38cdf63       busybox                                          default
	80475d9bc2771       ead0a4a53df89       15 seconds ago      Running             coredns                   0                   cd75a3dc79d90       coredns-5dd5756b68-2fdsv                         kube-system
	089b66b211cc0       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   8489f4374b9ca       storage-provisioner                              kube-system
	39b3d72b0119b       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   4e7fe0b0a93a6       kindnet-q8xnm                                    kube-system
	ef4e4389e44ca       ea1030da44aa1       29 seconds ago      Running             kube-proxy                0                   5b9d69d308423       kube-proxy-tzq9b                                 kube-system
	0ef7f303a2ce3       f6f496300a2ae       47 seconds ago      Running             kube-scheduler            0                   2757f6f1f2847       kube-scheduler-old-k8s-version-204346            kube-system
	8f2985624466e       4be79c38a4bab       47 seconds ago      Running             kube-controller-manager   0                   7d13da4692cf0       kube-controller-manager-old-k8s-version-204346   kube-system
	328d012e2a9c6       bb5e0dde9054c       47 seconds ago      Running             kube-apiserver            0                   801b406a053e0       kube-apiserver-old-k8s-version-204346            kube-system
	09bd2ad51bcbe       73deb9a3f7025       47 seconds ago      Running             etcd                      0                   bd3a3ff71b569       etcd-old-k8s-version-204346                      kube-system
	
	
	==> containerd <==
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.554367695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2fdsv,Uid:1c71e052-b3c2-4875-8aeb-7d724ee26e06,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd75a3dc79d9055a439d60e0b8c3a0eaf0c09774664074c042478ddbd42d8ed7\""
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.557881691Z" level=info msg="CreateContainer within sandbox \"cd75a3dc79d9055a439d60e0b8c3a0eaf0c09774664074c042478ddbd42d8ed7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.565420837Z" level=info msg="Container 80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.572367270Z" level=info msg="CreateContainer within sandbox \"cd75a3dc79d9055a439d60e0b8c3a0eaf0c09774664074c042478ddbd42d8ed7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a\""
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.573105266Z" level=info msg="StartContainer for \"80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a\""
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.573985605Z" level=info msg="connecting to shim 80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a" address="unix:///run/containerd/s/402875f21b0b7b033dcd7b3cca8f2720835d3f90418b17dd5f3df52485b09e0c" protocol=ttrpc version=3
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.602588352Z" level=info msg="StartContainer for \"089b66b211cc086767c9fdf40aba06bcf7b4484c0976381a4bdf51afe2621f61\" returns successfully"
	Nov 23 08:43:57 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:43:57.630751490Z" level=info msg="StartContainer for \"80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a\" returns successfully"
	Nov 23 08:44:00 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:00.171495043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:85a1fcd5-ee10-4749-9dec-40efed82eb3e,Namespace:default,Attempt:0,}"
	Nov 23 08:44:00 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:00.210794452Z" level=info msg="connecting to shim 34632f38cdf63a655e8bb7d39dd15ba97b0a7a53c3d2190fc06701fde9c49996" address="unix:///run/containerd/s/9131634b5b9e099a09d55b33b67bba908aad637f11b87abf7ed2211b15f763a9" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:44:00 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:00.287286149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:85a1fcd5-ee10-4749-9dec-40efed82eb3e,Namespace:default,Attempt:0,} returns sandbox id \"34632f38cdf63a655e8bb7d39dd15ba97b0a7a53c3d2190fc06701fde9c49996\""
	Nov 23 08:44:00 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:00.289225870Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.394106458Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.394929355Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.396449964Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.399611876Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.400256412Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.110984688s"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.400309785Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.402701592Z" level=info msg="CreateContainer within sandbox \"34632f38cdf63a655e8bb7d39dd15ba97b0a7a53c3d2190fc06701fde9c49996\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.410744826Z" level=info msg="Container 1357388ae0aa594dabe5692b9f6c39afa871a26d6dd0b5809e1510839a986dd5: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.419870192Z" level=info msg="CreateContainer within sandbox \"34632f38cdf63a655e8bb7d39dd15ba97b0a7a53c3d2190fc06701fde9c49996\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"1357388ae0aa594dabe5692b9f6c39afa871a26d6dd0b5809e1510839a986dd5\""
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.421053047Z" level=info msg="StartContainer for \"1357388ae0aa594dabe5692b9f6c39afa871a26d6dd0b5809e1510839a986dd5\""
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.422071051Z" level=info msg="connecting to shim 1357388ae0aa594dabe5692b9f6c39afa871a26d6dd0b5809e1510839a986dd5" address="unix:///run/containerd/s/9131634b5b9e099a09d55b33b67bba908aad637f11b87abf7ed2211b15f763a9" protocol=ttrpc version=3
	Nov 23 08:44:02 old-k8s-version-204346 containerd[661]: time="2025-11-23T08:44:02.495260690Z" level=info msg="StartContainer for \"1357388ae0aa594dabe5692b9f6c39afa871a26d6dd0b5809e1510839a986dd5\" returns successfully"
	Nov 23 08:44:09 old-k8s-version-204346 containerd[661]: E1123 08:44:09.948064     661 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [80475d9bc2771a5b76c88ec3e691c3e9e026b5054aa1bbf27b0fd3499a79fd1a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38064 - 25011 "HINFO IN 3150570816276822377.3169321318277058455. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024835318s
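	
	The single HINFO query above is CoreDNS's startup self-check against the upstream resolver; beyond that the log is quiet. One hedged way to exercise this resolver from inside the cluster, reusing the busybox pod listed in the container status table (assumes nslookup exists in that image and that the same host-record injection was applied to this profile):
	
	    kubectl --context old-k8s-version-204346 exec busybox -- nslookup host.minikube.internal
	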
	
	
	==> describe nodes <==
	Name:               old-k8s-version-204346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-204346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=old-k8s-version-204346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_43_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:43:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-204346
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:44:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:00 +0000   Sun, 23 Nov 2025 08:43:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:00 +0000   Sun, 23 Nov 2025 08:43:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:00 +0000   Sun, 23 Nov 2025 08:43:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:44:00 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-204346
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ddf0e41b-1230-4041-b2b0-aca7ba0a6fe4
	  Boot ID:                    3bab2277-1db4-4284-9fcc-5d1d58e87eb4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-5dd5756b68-2fdsv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-old-k8s-version-204346                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         44s
	  kube-system                 kindnet-q8xnm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-204346             250m (3%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-old-k8s-version-204346    200m (2%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-tzq9b                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-204346             100m (1%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-204346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-204346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x7 over 49s)  kubelet          Node old-k8s-version-204346 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  49s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  44s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  44s                kubelet          Node old-k8s-version-204346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s                kubelet          Node old-k8s-version-204346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s                kubelet          Node old-k8s-version-204346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node old-k8s-version-204346 event: Registered Node old-k8s-version-204346 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-204346 status is now: NodeReady
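	
	Per the Conditions table above, the node transitioned to Ready at 08:43:57, roughly 16 seconds before this snapshot. A hedged one-liner for pulling just that condition out of the same object (standard kubectl JSONPath filtering; context and node names taken from the report):
	
	    kubectl --context old-k8s-version-204346 get node old-k8s-version-204346 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # Prints: True
	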
	
	
	==> dmesg <==
	[Nov23 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395963] i8042: Warning: Keylock active
	[  +0.012075] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497035] block sda: the capability attribute has been deprecated.
	[  +0.088048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022581] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.308229] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [09bd2ad51bcbe3133715a0348c39fbd488688f92fdc757fef7b242366c6eb72b] <==
	{"level":"info","ts":"2025-11-23T08:43:25.072307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-11-23T08:43:25.072449Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:43:25.073769Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:43:25.074175Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:43:25.073803Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-23T08:43:25.074517Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-23T08:43:25.074362Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:43:25.459144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T08:43:25.459188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T08:43:25.459233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-11-23T08:43:25.459253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:25.459261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:25.459281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:25.459298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:25.460336Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-204346 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:43:25.460368Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:43:25.460352Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:25.460547Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:43:25.46207Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T08:43:25.460343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:43:25.46151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:25.462309Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:25.462347Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:25.461945Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:43:25.466791Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 08:44:13 up  1:26,  0 user,  load average: 2.68, 2.53, 1.78
	Linux old-k8s-version-204346 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [39b3d72b0119bcc6ecd6e57b170ea19f5592bba7f48f0436c996349c8ca348dd] <==
	I1123 08:43:46.866967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:43:46.867287       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:43:46.867434       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:43:46.867454       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:43:46.867482       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:43:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:43:47.067711       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:43:47.067748       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:43:47.067760       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:43:47.067904       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:43:47.369355       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:43:47.369384       1 metrics.go:72] Registering metrics
	I1123 08:43:47.369441       1 controller.go:711] "Syncing nftables rules"
	I1123 08:43:57.076844       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:43:57.076915       1 main.go:301] handling current node
	I1123 08:44:07.068039       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:44:07.068093       1 main.go:301] handling current node
	
	
	==> kube-apiserver [328d012e2a9c60b89bce2737c3bcb6c1f31581c21f2a3f2969cf002ad66bc982] <==
	I1123 08:43:26.887380       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:43:26.887389       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:43:26.887641       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 08:43:26.887685       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 08:43:26.887980       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:43:26.888304       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1123 08:43:26.889201       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 08:43:26.889373       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:43:26.893730       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:43:27.092344       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:43:27.794220       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:43:27.798285       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:43:27.798301       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:43:28.278123       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:43:28.347605       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:43:28.396516       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:43:28.402119       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 08:43:28.403251       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:43:28.410689       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:43:28.846011       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:43:29.796332       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:43:29.808173       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:43:29.820075       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 08:43:42.454084       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:43:42.555727       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8f2985624466e7aea2ab0922f065c597c0bfd5950e9a7d9af9278d532ea162aa] <==
	I1123 08:43:42.301940       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:43:42.313117       1 shared_informer.go:318] Caches are synced for endpoint
	I1123 08:43:42.320707       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:43:42.468731       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tzq9b"
	I1123 08:43:42.470032       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-q8xnm"
	I1123 08:43:42.562465       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 08:43:42.637391       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:43:42.693556       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:43:42.693596       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:43:42.710317       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-j49bt"
	I1123 08:43:42.720116       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2fdsv"
	I1123 08:43:42.729591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="167.450584ms"
	I1123 08:43:42.750029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.070236ms"
	I1123 08:43:42.772635       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.530968ms"
	I1123 08:43:42.772808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.9µs"
	I1123 08:43:42.817260       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 08:43:42.828181       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-j49bt"
	I1123 08:43:42.834660       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.534321ms"
	I1123 08:43:42.847353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.631926ms"
	I1123 08:43:42.847627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="198.148µs"
	I1123 08:43:57.121773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="186.5µs"
	I1123 08:43:57.150540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.97µs"
	I1123 08:43:57.197693       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1123 08:43:57.981361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.07769ms"
	I1123 08:43:57.981507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.031µs"
	
	
	==> kube-proxy [ef4e4389e44ca59002bc45aac4774894eff14408a6f6654c403f41a7f5ae9178] <==
	I1123 08:43:43.138692       1 server_others.go:69] "Using iptables proxy"
	I1123 08:43:43.148849       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1123 08:43:43.173806       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:43:43.177107       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:43:43.177190       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:43:43.177209       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:43:43.177247       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:43:43.177554       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:43:43.177673       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:43:43.178478       1 config.go:188] "Starting service config controller"
	I1123 08:43:43.178510       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:43:43.179694       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:43:43.179818       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:43:43.180065       1 config.go:315] "Starting node config controller"
	I1123 08:43:43.180084       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:43:43.280364       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:43:43.280485       1 shared_informer.go:318] Caches are synced for node config
	I1123 08:43:43.280575       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0ef7f303a2ce364a193b1c3a534acf3ce3197306c4c2cc9dd0d5717ae9adf953] <==
	W1123 08:43:26.854417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 08:43:26.854437       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 08:43:26.854443       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 08:43:26.854473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 08:43:26.854661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 08:43:26.854686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1123 08:43:26.854994       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 08:43:26.855027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 08:43:27.681328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 08:43:27.681369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 08:43:27.807379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 08:43:27.807413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 08:43:27.818838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 08:43:27.818882       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 08:43:27.819991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:43:27.820027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 08:43:27.871687       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 08:43:27.871733       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:43:27.919852       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:43:27.919895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:43:28.036804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1123 08:43:28.036839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 08:43:28.055978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 08:43:28.056016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1123 08:43:29.649311       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.141354    1529 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.142046    1529 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.476770    1529 topology_manager.go:215] "Topology Admit Handler" podUID="5d122719-2577-438f-bae7-72a1034f88ef" podNamespace="kube-system" podName="kube-proxy-tzq9b"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.478900    1529 topology_manager.go:215] "Topology Admit Handler" podUID="c3178adf-8eb3-4210-9674-fdda89d3317d" podNamespace="kube-system" podName="kindnet-q8xnm"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651490    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksdwp\" (UniqueName: \"kubernetes.io/projected/5d122719-2577-438f-bae7-72a1034f88ef-kube-api-access-ksdwp\") pod \"kube-proxy-tzq9b\" (UID: \"5d122719-2577-438f-bae7-72a1034f88ef\") " pod="kube-system/kube-proxy-tzq9b"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651698    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3178adf-8eb3-4210-9674-fdda89d3317d-lib-modules\") pod \"kindnet-q8xnm\" (UID: \"c3178adf-8eb3-4210-9674-fdda89d3317d\") " pod="kube-system/kindnet-q8xnm"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651862    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d122719-2577-438f-bae7-72a1034f88ef-lib-modules\") pod \"kube-proxy-tzq9b\" (UID: \"5d122719-2577-438f-bae7-72a1034f88ef\") " pod="kube-system/kube-proxy-tzq9b"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651898    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c3178adf-8eb3-4210-9674-fdda89d3317d-cni-cfg\") pod \"kindnet-q8xnm\" (UID: \"c3178adf-8eb3-4210-9674-fdda89d3317d\") " pod="kube-system/kindnet-q8xnm"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651928    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3178adf-8eb3-4210-9674-fdda89d3317d-xtables-lock\") pod \"kindnet-q8xnm\" (UID: \"c3178adf-8eb3-4210-9674-fdda89d3317d\") " pod="kube-system/kindnet-q8xnm"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651960    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9ntt\" (UniqueName: \"kubernetes.io/projected/c3178adf-8eb3-4210-9674-fdda89d3317d-kube-api-access-m9ntt\") pod \"kindnet-q8xnm\" (UID: \"c3178adf-8eb3-4210-9674-fdda89d3317d\") " pod="kube-system/kindnet-q8xnm"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.651992    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d122719-2577-438f-bae7-72a1034f88ef-kube-proxy\") pod \"kube-proxy-tzq9b\" (UID: \"5d122719-2577-438f-bae7-72a1034f88ef\") " pod="kube-system/kube-proxy-tzq9b"
	Nov 23 08:43:42 old-k8s-version-204346 kubelet[1529]: I1123 08:43:42.652021    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d122719-2577-438f-bae7-72a1034f88ef-xtables-lock\") pod \"kube-proxy-tzq9b\" (UID: \"5d122719-2577-438f-bae7-72a1034f88ef\") " pod="kube-system/kube-proxy-tzq9b"
	Nov 23 08:43:46 old-k8s-version-204346 kubelet[1529]: I1123 08:43:46.940830    1529 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tzq9b" podStartSLOduration=4.940768474 podCreationTimestamp="2025-11-23 08:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:43.932316562 +0000 UTC m=+14.168739010" watchObservedRunningTime="2025-11-23 08:43:46.940768474 +0000 UTC m=+17.177190922"
	Nov 23 08:43:46 old-k8s-version-204346 kubelet[1529]: I1123 08:43:46.940988    1529 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-q8xnm" podStartSLOduration=1.718157541 podCreationTimestamp="2025-11-23 08:43:42 +0000 UTC" firstStartedPulling="2025-11-23 08:43:43.30687244 +0000 UTC m=+13.543294877" lastFinishedPulling="2025-11-23 08:43:46.52967151 +0000 UTC m=+16.766093948" observedRunningTime="2025-11-23 08:43:46.940594815 +0000 UTC m=+17.177017264" watchObservedRunningTime="2025-11-23 08:43:46.940956612 +0000 UTC m=+17.177379059"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.093693    1529 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.122486    1529 topology_manager.go:215] "Topology Admit Handler" podUID="1c71e052-b3c2-4875-8aeb-7d724ee26e06" podNamespace="kube-system" podName="coredns-5dd5756b68-2fdsv"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.122759    1529 topology_manager.go:215] "Topology Admit Handler" podUID="372382d8-d23f-4e6d-89ae-8f2c9c46b6dc" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.263400    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c71e052-b3c2-4875-8aeb-7d724ee26e06-config-volume\") pod \"coredns-5dd5756b68-2fdsv\" (UID: \"1c71e052-b3c2-4875-8aeb-7d724ee26e06\") " pod="kube-system/coredns-5dd5756b68-2fdsv"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.263464    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474bl\" (UniqueName: \"kubernetes.io/projected/1c71e052-b3c2-4875-8aeb-7d724ee26e06-kube-api-access-474bl\") pod \"coredns-5dd5756b68-2fdsv\" (UID: \"1c71e052-b3c2-4875-8aeb-7d724ee26e06\") " pod="kube-system/coredns-5dd5756b68-2fdsv"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.263575    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/372382d8-d23f-4e6d-89ae-8f2c9c46b6dc-tmp\") pod \"storage-provisioner\" (UID: \"372382d8-d23f-4e6d-89ae-8f2c9c46b6dc\") " pod="kube-system/storage-provisioner"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.263625    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cbg7\" (UniqueName: \"kubernetes.io/projected/372382d8-d23f-4e6d-89ae-8f2c9c46b6dc-kube-api-access-2cbg7\") pod \"storage-provisioner\" (UID: \"372382d8-d23f-4e6d-89ae-8f2c9c46b6dc\") " pod="kube-system/storage-provisioner"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.963727    1529 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.963673229 podCreationTimestamp="2025-11-23 08:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.963551537 +0000 UTC m=+28.199973987" watchObservedRunningTime="2025-11-23 08:43:57.963673229 +0000 UTC m=+28.200095677"
	Nov 23 08:43:57 old-k8s-version-204346 kubelet[1529]: I1123 08:43:57.974383    1529 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2fdsv" podStartSLOduration=15.974330092 podCreationTimestamp="2025-11-23 08:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.974110377 +0000 UTC m=+28.210532825" watchObservedRunningTime="2025-11-23 08:43:57.974330092 +0000 UTC m=+28.210752539"
	Nov 23 08:43:59 old-k8s-version-204346 kubelet[1529]: I1123 08:43:59.862724    1529 topology_manager.go:215] "Topology Admit Handler" podUID="85a1fcd5-ee10-4749-9dec-40efed82eb3e" podNamespace="default" podName="busybox"
	Nov 23 08:43:59 old-k8s-version-204346 kubelet[1529]: I1123 08:43:59.981400    1529 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdg6d\" (UniqueName: \"kubernetes.io/projected/85a1fcd5-ee10-4749-9dec-40efed82eb3e-kube-api-access-tdg6d\") pod \"busybox\" (UID: \"85a1fcd5-ee10-4749-9dec-40efed82eb3e\") " pod="default/busybox"
	
	
	==> storage-provisioner [089b66b211cc086767c9fdf40aba06bcf7b4484c0976381a4bdf51afe2621f61] <==
	I1123 08:43:57.613751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:43:57.624633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:43:57.624700       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:43:57.633950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:43:57.634082       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a0771e73-2533-4e9a-bd83-ee78487b1f50", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-204346_bff6cf86-fcf0-4fe3-b85e-b85b2509b23f became leader
	I1123 08:43:57.634291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-204346_bff6cf86-fcf0-4fe3-b85e-b85b2509b23f!
	I1123 08:43:57.734684       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-204346_bff6cf86-fcf0-4fe3-b85e-b85b2509b23f!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-204346 -n old-k8s-version-204346
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-204346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (14.15s)

TestStartStop/group/no-preload/serial/DeployApp (11.84s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-999106 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f39d2d0a-c018-4ceb-a70a-4746b4cb29b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f39d2d0a-c018-4ceb-a70a-4746b4cb29b7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003749585s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-999106 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
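The failed assertion above compares the soft open-file limit (RLIMIT_NOFILE, as reported by "ulimit -n") inside the busybox pod against the 1048576 the suite expects. A minimal manual reproduction of the same check, assuming the no-preload-999106 profile and the busybox pod from testdata/busybox.yaml are still running:

	# Same probe the test runs: soft open-file limit inside the pod
	kubectl --context no-preload-999106 exec busybox -- /bin/sh -c "ulimit -n"
	# This run printed 1024 where the test expects 1048576. busybox ash
	# also accepts -H, so the hard limit can be read for comparison:
	kubectl --context no-preload-999106 exec busybox -- /bin/sh -c "ulimit -Hn"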
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-999106
helpers_test.go:243: (dbg) docker inspect no-preload-999106:

-- stdout --
	[
	    {
	        "Id": "ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152",
	        "Created": "2025-11-23T08:43:28.421731875Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:43:28.455871749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152/hosts",
	        "LogPath": "/var/lib/docker/containers/ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152/ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152-json.log",
	        "Name": "/no-preload-999106",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-999106:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-999106",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152",
	                "LowerDir": "/var/lib/docker/overlay2/72ff6b6fef9c01d65514de74bb94f962337a1fb387163121a42044ee655dcc36-init/diff:/var/lib/docker/overlay2/ee04ca8b85d0dedeb02bd9a5189a59a7f53ca89a011d262a78df32fa43bf0598/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72ff6b6fef9c01d65514de74bb94f962337a1fb387163121a42044ee655dcc36/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72ff6b6fef9c01d65514de74bb94f962337a1fb387163121a42044ee655dcc36/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72ff6b6fef9c01d65514de74bb94f962337a1fb387163121a42044ee655dcc36/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-999106",
	                "Source": "/var/lib/docker/volumes/no-preload-999106/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-999106",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-999106",
	                "name.minikube.sigs.k8s.io": "no-preload-999106",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fb27ae77f955366ae0d20e0ec0b777a08755986df40932cbf2eed2869d990c27",
	            "SandboxKey": "/var/run/docker/netns/fb27ae77f955",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-999106": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "99c05f963d3a40d0e5c08164681a744d92c5091accc0a4a9bccac6786eaf2906",
	                    "EndpointID": "3fabf91120f8744884c827bfa4e72409c884efbebda3fa228fe1c57b33694b13",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "42:6e:ad:4c:f6:5c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-999106",
	                        "ad2c2c077ca3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
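Note that the inspect output above reports "Ulimits": [] in HostConfig, i.e. no explicit nofile override was applied when this kicbase container was created, so processes inside it inherit the Docker daemon's (or the runtime's) default limits. A quick sketch for confirming just that field, assuming the container still exists on the host:

	# Print only the HostConfig.Ulimits field of the container
	docker inspect --format '{{json .HostConfig.Ulimits}}' no-preload-999106
	# An empty list means the daemon's default-ulimits (daemon.json) or the
	# runtime defaults govern RLIMIT_NOFILE for /sbin/init and its children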
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-999106 -n no-preload-999106
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-999106 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-999106 logs -n 25: (1.020891994s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-env-352249                                                                                                                                                                                                                         │ force-systemd-env-352249  │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p cert-expiration-680868 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-680868    │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ force-systemd-flag-570956 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-570956 │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p force-systemd-flag-570956                                                                                                                                                                                                                        │ force-systemd-flag-570956 │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p cert-options-194967 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ stop    │ -p NoKubernetes-846693                                                                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p NoKubernetes-846693 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ -p NoKubernetes-846693 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │                     │
	│ delete  │ -p NoKubernetes-846693                                                                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p missing-upgrade-231159 --memory=3072 --driver=docker  --container-runtime=containerd                                                                                                                                                             │ missing-upgrade-231159    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ cert-options-194967 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ -p cert-options-194967 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ delete  │ -p cert-options-194967                                                                                                                                                                                                                              │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p stopped-upgrade-595653 --memory=3072 --vm-driver=docker  --container-runtime=containerd                                                                                                                                                          │ stopped-upgrade-595653    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p missing-upgrade-231159 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                      │ missing-upgrade-231159    │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ stopped-upgrade-595653 stop                                                                                                                                                                                                                         │ stopped-upgrade-595653    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p stopped-upgrade-595653 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                      │ stopped-upgrade-595653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p stopped-upgrade-595653                                                                                                                                                                                                                           │ stopped-upgrade-595653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p old-k8s-version-204346 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p missing-upgrade-231159                                                                                                                                                                                                                           │ missing-upgrade-231159    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-999106         │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-204346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ stop    │ -p old-k8s-version-204346 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-204346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p old-k8s-version-204346 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:44:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:44:27.061593  268762 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:44:27.061725  268762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:27.061736  268762 out.go:374] Setting ErrFile to fd 2...
	I1123 08:44:27.061743  268762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:27.061970  268762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:44:27.062415  268762 out.go:368] Setting JSON to false
	I1123 08:44:27.063602  268762 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5208,"bootTime":1763882259,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:44:27.063664  268762 start.go:143] virtualization: kvm guest
	I1123 08:44:27.065752  268762 out.go:179] * [old-k8s-version-204346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:44:27.067097  268762 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:44:27.067146  268762 notify.go:221] Checking for updates...
	I1123 08:44:27.069552  268762 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:44:27.070843  268762 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:44:27.072061  268762 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:44:27.073173  268762 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:44:27.074191  268762 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:44:27.075690  268762 config.go:182] Loaded profile config "old-k8s-version-204346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:44:27.077256  268762 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:44:27.078172  268762 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:44:27.102115  268762 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:44:27.102190  268762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:27.159084  268762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:44:27.149465588 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:27.159199  268762 docker.go:319] overlay module found
	I1123 08:44:27.160913  268762 out.go:179] * Using the docker driver based on existing profile
	I1123 08:44:27.162100  268762 start.go:309] selected driver: docker
	I1123 08:44:27.162115  268762 start.go:927] validating driver "docker" against &{Name:old-k8s-version-204346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-204346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:27.162190  268762 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:44:27.162772  268762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:27.224741  268762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:44:27.214852266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:27.225070  268762 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:27.225098  268762 cni.go:84] Creating CNI manager for ""
	I1123 08:44:27.225153  268762 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:44:27.225183  268762 start.go:353] cluster config:
	{Name:old-k8s-version-204346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-204346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:27.227162  268762 out.go:179] * Starting "old-k8s-version-204346" primary control-plane node in "old-k8s-version-204346" cluster
	I1123 08:44:27.228355  268762 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:44:27.229438  268762 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:44:27.230484  268762 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:44:27.230511  268762 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 08:44:27.230515  268762 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:44:27.230524  268762 cache.go:65] Caching tarball of preloaded images
	I1123 08:44:27.230622  268762 preload.go:238] Found /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:44:27.230635  268762 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:44:27.230759  268762 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/old-k8s-version-204346/config.json ...
	I1123 08:44:27.251172  268762 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:44:27.251199  268762 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:44:27.251214  268762 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:44:27.251241  268762 start.go:360] acquireMachinesLock for old-k8s-version-204346: {Name:mkc8dfec607e9f2fe653f7594782b98ccf59083b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:27.251292  268762 start.go:364] duration metric: took 35.071µs to acquireMachinesLock for "old-k8s-version-204346"
	I1123 08:44:27.251317  268762 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:44:27.251324  268762 fix.go:54] fixHost starting: 
	I1123 08:44:27.251511  268762 cli_runner.go:164] Run: docker container inspect old-k8s-version-204346 --format={{.State.Status}}
	I1123 08:44:27.269325  268762 fix.go:112] recreateIfNeeded on old-k8s-version-204346: state=Stopped err=<nil>
	W1123 08:44:27.269353  268762 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:44:22.366328  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:22.366354  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:22.366360  206485 cri.go:89] found id: ""
	I1123 08:44:22.366369  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:44:22.366423  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.370838  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.374686  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:44:22.374755  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:44:22.401966  206485 cri.go:89] found id: ""
	I1123 08:44:22.401994  206485 logs.go:282] 0 containers: []
	W1123 08:44:22.402002  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:44:22.402008  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:44:22.402064  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:44:22.429417  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:22.429441  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:22.429447  206485 cri.go:89] found id: ""
	I1123 08:44:22.429455  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:44:22.429519  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.433831  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.438074  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:44:22.438148  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:44:22.465104  206485 cri.go:89] found id: ""
	I1123 08:44:22.465134  206485 logs.go:282] 0 containers: []
	W1123 08:44:22.465145  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:44:22.465153  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:44:22.465209  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:44:22.491961  206485 cri.go:89] found id: ""
	I1123 08:44:22.491983  206485 logs.go:282] 0 containers: []
	W1123 08:44:22.491991  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:44:22.492000  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:44:22.492011  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:44:22.505320  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:44:22.505348  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:22.539731  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:44:22.539760  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:22.567009  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:44:22.567043  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:22.597917  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:44:22.597948  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:44:22.656054  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:44:22.656075  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:44:22.656091  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:22.687968  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:44:22.687997  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:22.723374  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:44:22.723406  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:22.757145  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:44:22.757174  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:22.812852  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:44:22.812884  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:44:22.858684  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:44:22.858714  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:44:22.889916  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:44:22.889943  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:44:25.480284  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:44:25.480769  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:44:25.480816  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:44:25.480868  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:44:25.508127  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:25.508152  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:25.508158  206485 cri.go:89] found id: ""
	I1123 08:44:25.508166  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:44:25.508211  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.512252  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.516092  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:44:25.516151  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:44:25.543107  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:25.543126  206485 cri.go:89] found id: ""
	I1123 08:44:25.543135  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:44:25.543184  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.547277  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:44:25.547341  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:44:25.572601  206485 cri.go:89] found id: ""
	I1123 08:44:25.572623  206485 logs.go:282] 0 containers: []
	W1123 08:44:25.572631  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:44:25.572636  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:44:25.572724  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:44:25.598811  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:25.598835  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:25.598841  206485 cri.go:89] found id: ""
	I1123 08:44:25.598849  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:44:25.598903  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.603116  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.606887  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:44:25.606951  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:44:25.634048  206485 cri.go:89] found id: ""
	I1123 08:44:25.634081  206485 logs.go:282] 0 containers: []
	W1123 08:44:25.634092  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:44:25.634099  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:44:25.634159  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:44:25.660918  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:25.660938  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:25.660943  206485 cri.go:89] found id: ""
	I1123 08:44:25.660953  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:44:25.661009  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.665230  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.669495  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:44:25.669555  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:44:25.696223  206485 cri.go:89] found id: ""
	I1123 08:44:25.696254  206485 logs.go:282] 0 containers: []
	W1123 08:44:25.696266  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:44:25.696275  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:44:25.696330  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:44:25.723010  206485 cri.go:89] found id: ""
	I1123 08:44:25.723036  206485 logs.go:282] 0 containers: []
	W1123 08:44:25.723046  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:44:25.723059  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:44:25.723074  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:25.758369  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:44:25.758401  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:25.785411  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:44:25.785440  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:44:25.877430  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:44:25.877466  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:25.910311  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:44:25.910338  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:25.944171  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:44:25.944200  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:25.975807  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:44:25.975840  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:44:26.026059  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:44:26.026095  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:44:26.057028  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:44:26.057053  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:44:26.070438  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:44:26.070465  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:44:26.127558  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:44:26.127581  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:44:26.127596  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:26.160547  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:44:26.160575  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f14a1cb2dafd9       56cc512116c8f       6 seconds ago       Running             busybox                   0                   a02c6bdf2359c       busybox                                     default
	63405bdae4894       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   cfa3bb13bb673       storage-provisioner                         kube-system
	2cc89e7267ac7       52546a367cc9e       12 seconds ago      Running             coredns                   0                   2e5945031a228       coredns-66bc5c9577-4frmr                    kube-system
	711a6c6b3e3e1       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   56da3428753ff       kindnet-wkmxg                               kube-system
	578024ff344be       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   0f04dc7d0f45d       kube-proxy-4775c                            kube-system
	a13994b985d61       c80c8dbafe7dd       36 seconds ago      Running             kube-controller-manager   0                   5c63f9a4fdb1b       kube-controller-manager-no-preload-999106   kube-system
	a4e26b13f04d9       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   1a08dd52d7ff6       kube-scheduler-no-preload-999106            kube-system
	50b37fae6e17a       c3994bc696102       36 seconds ago      Running             kube-apiserver            0                   234a3670fe47c       kube-apiserver-no-preload-999106            kube-system
	77aeaa21182e4       5f1f5298c888d       36 seconds ago      Running             etcd                      0                   1c54b1a9d7e47       etcd-no-preload-999106                      kube-system
	
	
	==> containerd <==
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.604304240Z" level=info msg="Container 63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.608065888Z" level=info msg="CreateContainer within sandbox \"2e5945031a22840b743cd39c9cb17b6cd02819b542e1ccbe4ade44fa878d1cfa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cc89e7267ac7630562f3769c91fe81185986fe21c675aca66b47e76b0b64159\""
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.608626798Z" level=info msg="StartContainer for \"2cc89e7267ac7630562f3769c91fe81185986fe21c675aca66b47e76b0b64159\""
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.609730021Z" level=info msg="connecting to shim 2cc89e7267ac7630562f3769c91fe81185986fe21c675aca66b47e76b0b64159" address="unix:///run/containerd/s/764aad2d543665de36e0fb9128b3b9bf2af86c8e6bc018d6ff79d28fedb038fe" protocol=ttrpc version=3
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.613050918Z" level=info msg="CreateContainer within sandbox \"cfa3bb13bb673a245a2439846b845f290bcf2036baff1f4a12f4cee09a4188d2\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7\""
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.613698877Z" level=info msg="StartContainer for \"63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7\""
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.614794700Z" level=info msg="connecting to shim 63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7" address="unix:///run/containerd/s/a9eb7b146d3de1c9d88f4eb8d66b152b7db740ca40bae611f2b9d44cf21f906a" protocol=ttrpc version=3
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.663903640Z" level=info msg="StartContainer for \"63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7\" returns successfully"
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.674801572Z" level=info msg="StartContainer for \"2cc89e7267ac7630562f3769c91fe81185986fe21c675aca66b47e76b0b64159\" returns successfully"
	Nov 23 08:44:19 no-preload-999106 containerd[667]: time="2025-11-23T08:44:19.908523668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f39d2d0a-c018-4ceb-a70a-4746b4cb29b7,Namespace:default,Attempt:0,}"
	Nov 23 08:44:19 no-preload-999106 containerd[667]: time="2025-11-23T08:44:19.949738728Z" level=info msg="connecting to shim a02c6bdf2359c79b2d471ca115d7b9846ea74efd46463fc1adb1b69683857285" address="unix:///run/containerd/s/e6f6ea797bcf5a0d8a997721aa800f35bbeb5dc4a189527ea46d82f6158dc06b" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:44:20 no-preload-999106 containerd[667]: time="2025-11-23T08:44:20.022770433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f39d2d0a-c018-4ceb-a70a-4746b4cb29b7,Namespace:default,Attempt:0,} returns sandbox id \"a02c6bdf2359c79b2d471ca115d7b9846ea74efd46463fc1adb1b69683857285\""
	Nov 23 08:44:20 no-preload-999106 containerd[667]: time="2025-11-23T08:44:20.024433308Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.202414719Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.203163573Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.204120958Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.205812083Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.206181245Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.181708896s"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.206214207Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.211334604Z" level=info msg="CreateContainer within sandbox \"a02c6bdf2359c79b2d471ca115d7b9846ea74efd46463fc1adb1b69683857285\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.219402208Z" level=info msg="Container f14a1cb2dafd9e7f055524ad5cf6df49c90aaf9227a06f57c9afdf1b85bf64a0: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.226170075Z" level=info msg="CreateContainer within sandbox \"a02c6bdf2359c79b2d471ca115d7b9846ea74efd46463fc1adb1b69683857285\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"f14a1cb2dafd9e7f055524ad5cf6df49c90aaf9227a06f57c9afdf1b85bf64a0\""
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.226867512Z" level=info msg="StartContainer for \"f14a1cb2dafd9e7f055524ad5cf6df49c90aaf9227a06f57c9afdf1b85bf64a0\""
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.227834015Z" level=info msg="connecting to shim f14a1cb2dafd9e7f055524ad5cf6df49c90aaf9227a06f57c9afdf1b85bf64a0" address="unix:///run/containerd/s/e6f6ea797bcf5a0d8a997721aa800f35bbeb5dc4a189527ea46d82f6158dc06b" protocol=ttrpc version=3
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.287319879Z" level=info msg="StartContainer for \"f14a1cb2dafd9e7f055524ad5cf6df49c90aaf9227a06f57c9afdf1b85bf64a0\" returns successfully"
	
	
	==> coredns [2cc89e7267ac7630562f3769c91fe81185986fe21c675aca66b47e76b0b64159] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46471 - 63279 "HINFO IN 5262590171811647033.5549675669037636196. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.113582469s
	
	
	==> describe nodes <==
	Name:               no-preload-999106
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-999106
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=no-preload-999106
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_43_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:43:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-999106
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:44:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:26 +0000   Sun, 23 Nov 2025 08:43:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:26 +0000   Sun, 23 Nov 2025 08:43:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:26 +0000   Sun, 23 Nov 2025 08:43:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:44:26 +0000   Sun, 23 Nov 2025 08:44:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-999106
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                513fe179-27e5-4ae7-826a-4786a28960de
	  Boot ID:                    3bab2277-1db4-4284-9fcc-5d1d58e87eb4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-4frmr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-999106                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-wkmxg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-999106             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-999106    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-4775c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-999106             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node no-preload-999106 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node no-preload-999106 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node no-preload-999106 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node no-preload-999106 event: Registered Node no-preload-999106 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-999106 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395963] i8042: Warning: Keylock active
	[  +0.012075] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497035] block sda: the capability attribute has been deprecated.
	[  +0.088048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022581] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.308229] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [77aeaa21182e4928eee6a02a9cc9b49776b2b0847a46a97762af20e78cd0e209] <==
	{"level":"warn","ts":"2025-11-23T08:43:53.326595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.335740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.351807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.361769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.369577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.377265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.385504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.393894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.402014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.416786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.423527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.434035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.441325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.449091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.456194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.462618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.470053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.476949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.483521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.490271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.496838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.503963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.524213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.540163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.604152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44770","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:44:28 up  1:26,  0 user,  load average: 2.37, 2.47, 1.77
	Linux no-preload-999106 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [711a6c6b3e3e10b33da0eea19761c001ff7749f1323d476a25a86945a8958d6e] <==
	I1123 08:44:05.844554       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:05.844891       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:44:05.845021       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:05.845036       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:05.845055       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:06.045540       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:06.045632       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:06.045672       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:06.133122       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:06.446414       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:06.446441       1 metrics.go:72] Registering metrics
	I1123 08:44:06.446531       1 controller.go:711] "Syncing nftables rules"
	I1123 08:44:16.049759       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:44:16.049829       1 main.go:301] handling current node
	I1123 08:44:26.047791       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:44:26.047854       1 main.go:301] handling current node
	
	
	==> kube-apiserver [50b37fae6e17ae766a8008266e820179ef533d1d7d3eadca04a01f3868a49584] <==
	E1123 08:43:54.158526       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 08:43:54.206883       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:43:54.210795       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:43:54.210968       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:43:54.217329       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:43:54.217636       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:43:54.293120       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:43:55.008889       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:43:55.012788       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:43:55.012812       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:43:55.466543       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:43:55.501568       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:43:55.612582       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:43:55.619002       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:43:55.619995       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:43:55.624482       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:43:56.042573       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:43:56.391155       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:43:56.402802       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:43:56.412036       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:44:01.395890       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:01.400070       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:01.794686       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:44:01.894063       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:44:27.709782       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:56878: use of closed network connection
	
	
	==> kube-controller-manager [a13994b985d61282ea7193a0f310c727fce78ee0999e414f0d071cd87e65f853] <==
	I1123 08:44:01.014751       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:44:01.039726       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:01.039747       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:44:01.039755       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:44:01.040002       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:44:01.040785       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:44:01.040824       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:44:01.040862       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:44:01.040873       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:44:01.040896       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:44:01.040868       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:44:01.041022       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:44:01.041292       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:44:01.041608       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:44:01.041676       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:44:01.041714       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:44:01.042885       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:44:01.042928       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:44:01.042974       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:44:01.043145       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:44:01.046850       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:01.046860       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:44:01.046878       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:44:01.061450       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:20.994033       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [578024ff344be4a5fc7ab094a44f9bc79f4cccce12f99fa0d25f0324d44303b7] <==
	I1123 08:44:02.655868       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:02.724581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:02.825497       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:02.825538       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:44:02.825630       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:02.850324       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:02.850383       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:02.856443       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:02.856917       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:02.856957       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:02.858534       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:02.858553       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:02.858584       1 config.go:200] "Starting service config controller"
	I1123 08:44:02.859142       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:02.858676       1 config.go:309] "Starting node config controller"
	I1123 08:44:02.859180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:02.859186       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:02.858671       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:02.859194       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:02.959492       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:44:02.959516       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:44:02.959550       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a4e26b13f04d988d880cb437c2f2848d048b5790cf2383802d421e28a1de61fe] <==
	E1123 08:43:54.053444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:43:54.053284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:43:54.053481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:43:54.053366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:43:54.053603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:43:54.053693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:43:54.053728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:43:54.053745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:43:54.053772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:43:54.053801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:43:54.053841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:43:54.053894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:43:54.906479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:43:54.986264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:43:55.029504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:43:55.034470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:43:55.038553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:43:55.079288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:43:55.124526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:43:55.149590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:43:55.190050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:43:55.199094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:43:55.229165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:43:55.295785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 08:43:57.151080       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
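The burst of "Failed to watch ... is forbidden" errors above is the usual startup race: the scheduler's reflectors begin listing before its RBAC bindings have propagated, and they retry until the final "Caches are synced" line. Had the errors persisted, the grants could be confirmed directly via impersonation (a sketch; any of the listed resources works the same way):

	# ask the API server whether the scheduler identity may list the affected resources
	kubectl --context no-preload-999106 auth can-i list pods --as=system:kube-scheduler
	kubectl --context no-preload-999106 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler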
	
	
	==> kubelet <==
	Nov 23 08:43:57 no-preload-999106 kubelet[2177]: E1123 08:43:57.289086    2177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-999106\" already exists" pod="kube-system/kube-apiserver-no-preload-999106"
	Nov 23 08:43:57 no-preload-999106 kubelet[2177]: I1123 08:43:57.301546    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-999106" podStartSLOduration=2.301526127 podStartE2EDuration="2.301526127s" podCreationTimestamp="2025-11-23 08:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.301345592 +0000 UTC m=+1.134413322" watchObservedRunningTime="2025-11-23 08:43:57.301526127 +0000 UTC m=+1.134593858"
	Nov 23 08:43:57 no-preload-999106 kubelet[2177]: I1123 08:43:57.322366    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-999106" podStartSLOduration=1.3223489050000001 podStartE2EDuration="1.322348905s" podCreationTimestamp="2025-11-23 08:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.312159273 +0000 UTC m=+1.145227005" watchObservedRunningTime="2025-11-23 08:43:57.322348905 +0000 UTC m=+1.155416635"
	Nov 23 08:43:57 no-preload-999106 kubelet[2177]: I1123 08:43:57.332336    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-999106" podStartSLOduration=1.332314707 podStartE2EDuration="1.332314707s" podCreationTimestamp="2025-11-23 08:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.322465267 +0000 UTC m=+1.155532993" watchObservedRunningTime="2025-11-23 08:43:57.332314707 +0000 UTC m=+1.165382432"
	Nov 23 08:43:57 no-preload-999106 kubelet[2177]: I1123 08:43:57.332456    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-999106" podStartSLOduration=1.332451238 podStartE2EDuration="1.332451238s" podCreationTimestamp="2025-11-23 08:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.332238714 +0000 UTC m=+1.165306446" watchObservedRunningTime="2025-11-23 08:43:57.332451238 +0000 UTC m=+1.165518968"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.106211    2177 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.106883    2177 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.979903    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a4de139-3851-46a4-b057-5e61880bd43f-kube-proxy\") pod \"kube-proxy-4775c\" (UID: \"8a4de139-3851-46a4-b057-5e61880bd43f\") " pod="kube-system/kube-proxy-4775c"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.979959    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb8pv\" (UniqueName: \"kubernetes.io/projected/f7f0591c-04e2-4301-b210-21fd2cfa2614-kube-api-access-xb8pv\") pod \"kindnet-wkmxg\" (UID: \"f7f0591c-04e2-4301-b210-21fd2cfa2614\") " pod="kube-system/kindnet-wkmxg"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.979979    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a4de139-3851-46a4-b057-5e61880bd43f-xtables-lock\") pod \"kube-proxy-4775c\" (UID: \"8a4de139-3851-46a4-b057-5e61880bd43f\") " pod="kube-system/kube-proxy-4775c"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.979998    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdlnb\" (UniqueName: \"kubernetes.io/projected/8a4de139-3851-46a4-b057-5e61880bd43f-kube-api-access-gdlnb\") pod \"kube-proxy-4775c\" (UID: \"8a4de139-3851-46a4-b057-5e61880bd43f\") " pod="kube-system/kube-proxy-4775c"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.980012    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7f0591c-04e2-4301-b210-21fd2cfa2614-xtables-lock\") pod \"kindnet-wkmxg\" (UID: \"f7f0591c-04e2-4301-b210-21fd2cfa2614\") " pod="kube-system/kindnet-wkmxg"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.980027    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7f0591c-04e2-4301-b210-21fd2cfa2614-lib-modules\") pod \"kindnet-wkmxg\" (UID: \"f7f0591c-04e2-4301-b210-21fd2cfa2614\") " pod="kube-system/kindnet-wkmxg"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.980076    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7f0591c-04e2-4301-b210-21fd2cfa2614-cni-cfg\") pod \"kindnet-wkmxg\" (UID: \"f7f0591c-04e2-4301-b210-21fd2cfa2614\") " pod="kube-system/kindnet-wkmxg"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.980151    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a4de139-3851-46a4-b057-5e61880bd43f-lib-modules\") pod \"kube-proxy-4775c\" (UID: \"8a4de139-3851-46a4-b057-5e61880bd43f\") " pod="kube-system/kube-proxy-4775c"
	Nov 23 08:44:03 no-preload-999106 kubelet[2177]: I1123 08:44:03.445710    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4775c" podStartSLOduration=2.445677445 podStartE2EDuration="2.445677445s" podCreationTimestamp="2025-11-23 08:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:03.304148929 +0000 UTC m=+7.137216658" watchObservedRunningTime="2025-11-23 08:44:03.445677445 +0000 UTC m=+7.278745175"
	Nov 23 08:44:06 no-preload-999106 kubelet[2177]: I1123 08:44:06.313121    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wkmxg" podStartSLOduration=2.4096422459999998 podStartE2EDuration="5.313102245s" podCreationTimestamp="2025-11-23 08:44:01 +0000 UTC" firstStartedPulling="2025-11-23 08:44:02.602852046 +0000 UTC m=+6.435919770" lastFinishedPulling="2025-11-23 08:44:05.506312043 +0000 UTC m=+9.339379769" observedRunningTime="2025-11-23 08:44:06.313099577 +0000 UTC m=+10.146167307" watchObservedRunningTime="2025-11-23 08:44:06.313102245 +0000 UTC m=+10.146169974"
	Nov 23 08:44:16 no-preload-999106 kubelet[2177]: I1123 08:44:16.125364    2177 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:44:16 no-preload-999106 kubelet[2177]: I1123 08:44:16.281861    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2cm\" (UniqueName: \"kubernetes.io/projected/9305ab6d-7709-40d0-a8a3-d64dda164119-kube-api-access-5b2cm\") pod \"coredns-66bc5c9577-4frmr\" (UID: \"9305ab6d-7709-40d0-a8a3-d64dda164119\") " pod="kube-system/coredns-66bc5c9577-4frmr"
	Nov 23 08:44:16 no-preload-999106 kubelet[2177]: I1123 08:44:16.281935    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/328b82cc-8bf7-46bc-bab9-f254c0716802-tmp\") pod \"storage-provisioner\" (UID: \"328b82cc-8bf7-46bc-bab9-f254c0716802\") " pod="kube-system/storage-provisioner"
	Nov 23 08:44:16 no-preload-999106 kubelet[2177]: I1123 08:44:16.281963    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96kvf\" (UniqueName: \"kubernetes.io/projected/328b82cc-8bf7-46bc-bab9-f254c0716802-kube-api-access-96kvf\") pod \"storage-provisioner\" (UID: \"328b82cc-8bf7-46bc-bab9-f254c0716802\") " pod="kube-system/storage-provisioner"
	Nov 23 08:44:16 no-preload-999106 kubelet[2177]: I1123 08:44:16.281986    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9305ab6d-7709-40d0-a8a3-d64dda164119-config-volume\") pod \"coredns-66bc5c9577-4frmr\" (UID: \"9305ab6d-7709-40d0-a8a3-d64dda164119\") " pod="kube-system/coredns-66bc5c9577-4frmr"
	Nov 23 08:44:17 no-preload-999106 kubelet[2177]: I1123 08:44:17.338035    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.338015855 podStartE2EDuration="15.338015855s" podCreationTimestamp="2025-11-23 08:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:17.33797982 +0000 UTC m=+21.171047550" watchObservedRunningTime="2025-11-23 08:44:17.338015855 +0000 UTC m=+21.171083585"
	Nov 23 08:44:17 no-preload-999106 kubelet[2177]: I1123 08:44:17.349102    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4frmr" podStartSLOduration=15.349084852 podStartE2EDuration="15.349084852s" podCreationTimestamp="2025-11-23 08:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:17.348846652 +0000 UTC m=+21.181914381" watchObservedRunningTime="2025-11-23 08:44:17.349084852 +0000 UTC m=+21.182152584"
	Nov 23 08:44:19 no-preload-999106 kubelet[2177]: I1123 08:44:19.703995    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkk28\" (UniqueName: \"kubernetes.io/projected/f39d2d0a-c018-4ceb-a70a-4746b4cb29b7-kube-api-access-fkk28\") pod \"busybox\" (UID: \"f39d2d0a-c018-4ceb-a70a-4746b4cb29b7\") " pod="default/busybox"
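The kubelet timeline is healthy: the pod CIDR lands at 08:44:01, the kube-proxy and kindnet volumes mount immediately after, the node flips to ready at 08:44:16, and the busybox pod's token volume attaches at 08:44:19. The same milestones can be replayed from the host (a sketch, assuming the kubelet runs under systemd inside the node, as it does in the kicbase image):

	# grep the kubelet journal for the readiness and volume-attach events
	out/minikube-linux-amd64 -p no-preload-999106 ssh -- sudo journalctl -u kubelet --no-pager | grep -E 'became ready|VerifyControllerAttachedVolume'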
	
	
	==> storage-provisioner [63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7] <==
	I1123 08:44:16.674767       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:44:16.683745       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:44:16.683801       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:44:16.686626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:16.692338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:44:16.692589       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:44:16.692807       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-999106_d8c422c1-340f-45cf-95c6-b29dd02d7ad7!
	I1123 08:44:16.692739       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68a4bd73-1962-40e8-b78c-3faa8bee8f62", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-999106_d8c422c1-340f-45cf-95c6-b29dd02d7ad7 became leader
	W1123 08:44:16.695144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:16.699472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:44:16.793694       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-999106_d8c422c1-340f-45cf-95c6-b29dd02d7ad7!
	W1123 08:44:18.703063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:18.707134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:20.710228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:20.715772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:22.719315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:22.723039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:24.726425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:24.731256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:26.735101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:26.740812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:28.744472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:28.748585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
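The repeating v1 Endpoints deprecation warnings come from the provisioner's leader-election loop, which renews its lease through an Endpoints object every two seconds; they are noise, not a failure. The lease record the warnings refer to can be inspected directly:

	# the leader-election annotation lives on this Endpoints object in kube-system
	kubectl --context no-preload-999106 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml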
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-999106 -n no-preload-999106
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-999106 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-999106
helpers_test.go:243: (dbg) docker inspect no-preload-999106:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152",
	        "Created": "2025-11-23T08:43:28.421731875Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:43:28.455871749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152/hosts",
	        "LogPath": "/var/lib/docker/containers/ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152/ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152-json.log",
	        "Name": "/no-preload-999106",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-999106:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-999106",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ad2c2c077ca36dc23de2569ce0a9724810938016f25a9aef97d9597211e5b152",
	                "LowerDir": "/var/lib/docker/overlay2/72ff6b6fef9c01d65514de74bb94f962337a1fb387163121a42044ee655dcc36-init/diff:/var/lib/docker/overlay2/ee04ca8b85d0dedeb02bd9a5189a59a7f53ca89a011d262a78df32fa43bf0598/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72ff6b6fef9c01d65514de74bb94f962337a1fb387163121a42044ee655dcc36/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72ff6b6fef9c01d65514de74bb94f962337a1fb387163121a42044ee655dcc36/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72ff6b6fef9c01d65514de74bb94f962337a1fb387163121a42044ee655dcc36/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-999106",
	                "Source": "/var/lib/docker/volumes/no-preload-999106/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-999106",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-999106",
	                "name.minikube.sigs.k8s.io": "no-preload-999106",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fb27ae77f955366ae0d20e0ec0b777a08755986df40932cbf2eed2869d990c27",
	            "SandboxKey": "/var/run/docker/netns/fb27ae77f955",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-999106": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "99c05f963d3a40d0e5c08164681a744d92c5091accc0a4a9bccac6786eaf2906",
	                    "EndpointID": "3fabf91120f8744884c827bfa4e72409c884efbebda3fa228fe1c57b33694b13",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "42:6e:ad:4c:f6:5c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-999106",
	                        "ad2c2c077ca3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
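Two details in the inspect output are easy to miss in the full JSON: HostConfig.Ulimits is empty, so the node container inherits the Docker daemon's default limits, and every exposed port (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 with an ephemeral host port. Both can be read back with Go templates rather than scanning the dump (a sketch):

	# "[]" means no explicit limits were set, so daemon defaults apply
	docker inspect no-preload-999106 --format '{{json .HostConfig.Ulimits}}'
	# the host-side port for the API server (8443/tcp inside the container)
	docker inspect no-preload-999106 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'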
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-999106 -n no-preload-999106
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-999106 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-env-352249                                                                                                                                                                                                                         │ force-systemd-env-352249  │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p cert-expiration-680868 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-680868    │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ force-systemd-flag-570956 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-570956 │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ delete  │ -p force-systemd-flag-570956                                                                                                                                                                                                                        │ force-systemd-flag-570956 │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p cert-options-194967 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ stop    │ -p NoKubernetes-846693                                                                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:41 UTC │
	│ start   │ -p NoKubernetes-846693 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:41 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ -p NoKubernetes-846693 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │                     │
	│ delete  │ -p NoKubernetes-846693                                                                                                                                                                                                                              │ NoKubernetes-846693       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p missing-upgrade-231159 --memory=3072 --driver=docker  --container-runtime=containerd                                                                                                                                                             │ missing-upgrade-231159    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ cert-options-194967 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ ssh     │ -p cert-options-194967 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ delete  │ -p cert-options-194967                                                                                                                                                                                                                              │ cert-options-194967       │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p stopped-upgrade-595653 --memory=3072 --vm-driver=docker  --container-runtime=containerd                                                                                                                                                          │ stopped-upgrade-595653    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p missing-upgrade-231159 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                      │ missing-upgrade-231159    │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ stopped-upgrade-595653 stop                                                                                                                                                                                                                         │ stopped-upgrade-595653    │ jenkins │ v1.32.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:42 UTC │
	│ start   │ -p stopped-upgrade-595653 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                      │ stopped-upgrade-595653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:42 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p stopped-upgrade-595653                                                                                                                                                                                                                           │ stopped-upgrade-595653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p old-k8s-version-204346 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p missing-upgrade-231159                                                                                                                                                                                                                           │ missing-upgrade-231159    │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ start   │ -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-999106         │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-204346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ stop    │ -p old-k8s-version-204346 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-204346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p old-k8s-version-204346 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-204346    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
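The Audit table is minikube's persisted command history for this agent, rendered by the logs command. Assuming the default layout under the MINIKUBE_HOME printed below, the raw records behind it live in a JSON lines file (path is an assumption based on minikube's default audit log location):

	# inspect the most recent raw audit records behind the rendered table
	tail -n 5 /home/jenkins/minikube-integration/21969-13876/.minikube/logs/audit.json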
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:44:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:44:27.061593  268762 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:44:27.061725  268762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:27.061736  268762 out.go:374] Setting ErrFile to fd 2...
	I1123 08:44:27.061743  268762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:27.061970  268762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:44:27.062415  268762 out.go:368] Setting JSON to false
	I1123 08:44:27.063602  268762 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5208,"bootTime":1763882259,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:44:27.063664  268762 start.go:143] virtualization: kvm guest
	I1123 08:44:27.065752  268762 out.go:179] * [old-k8s-version-204346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:44:27.067097  268762 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:44:27.067146  268762 notify.go:221] Checking for updates...
	I1123 08:44:27.069552  268762 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:44:27.070843  268762 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:44:27.072061  268762 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:44:27.073173  268762 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:44:27.074191  268762 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:44:27.075690  268762 config.go:182] Loaded profile config "old-k8s-version-204346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:44:27.077256  268762 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:44:27.078172  268762 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:44:27.102115  268762 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:44:27.102190  268762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:27.159084  268762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:44:27.149465588 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:27.159199  268762 docker.go:319] overlay module found
	I1123 08:44:27.160913  268762 out.go:179] * Using the docker driver based on existing profile
	I1123 08:44:27.162100  268762 start.go:309] selected driver: docker
	I1123 08:44:27.162115  268762 start.go:927] validating driver "docker" against &{Name:old-k8s-version-204346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-204346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:27.162190  268762 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:44:27.162772  268762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:27.224741  268762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:44:27.214852266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:27.225070  268762 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:27.225098  268762 cni.go:84] Creating CNI manager for ""
	I1123 08:44:27.225153  268762 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:44:27.225183  268762 start.go:353] cluster config:
	{Name:old-k8s-version-204346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-204346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:27.227162  268762 out.go:179] * Starting "old-k8s-version-204346" primary control-plane node in "old-k8s-version-204346" cluster
	I1123 08:44:27.228355  268762 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:44:27.229438  268762 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:44:27.230484  268762 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:44:27.230511  268762 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 08:44:27.230515  268762 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:44:27.230524  268762 cache.go:65] Caching tarball of preloaded images
	I1123 08:44:27.230622  268762 preload.go:238] Found /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:44:27.230635  268762 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:44:27.230759  268762 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/old-k8s-version-204346/config.json ...
	I1123 08:44:27.251172  268762 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:44:27.251199  268762 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:44:27.251214  268762 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:44:27.251241  268762 start.go:360] acquireMachinesLock for old-k8s-version-204346: {Name:mkc8dfec607e9f2fe653f7594782b98ccf59083b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:27.251292  268762 start.go:364] duration metric: took 35.071µs to acquireMachinesLock for "old-k8s-version-204346"
	I1123 08:44:27.251317  268762 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:44:27.251324  268762 fix.go:54] fixHost starting: 
	I1123 08:44:27.251511  268762 cli_runner.go:164] Run: docker container inspect old-k8s-version-204346 --format={{.State.Status}}
	I1123 08:44:27.269325  268762 fix.go:112] recreateIfNeeded on old-k8s-version-204346: state=Stopped err=<nil>
	W1123 08:44:27.269353  268762 fix.go:138] unexpected machine state, will restart: <nil>
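	
	fixHost reuses the existing machine when Docker reports it stopped rather than recreating it; the state probe is the cli_runner call above. Run by hand (command copied verbatim from this log) it prints the raw Docker status, e.g. "exited" for a stopped KIC container:
	
	    docker container inspect old-k8s-version-204346 --format={{.State.Status}}
	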
	I1123 08:44:22.366328  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:22.366354  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:22.366360  206485 cri.go:89] found id: ""
	I1123 08:44:22.366369  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:44:22.366423  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.370838  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.374686  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:44:22.374755  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:44:22.401966  206485 cri.go:89] found id: ""
	I1123 08:44:22.401994  206485 logs.go:282] 0 containers: []
	W1123 08:44:22.402002  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:44:22.402008  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:44:22.402064  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:44:22.429417  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:22.429441  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:22.429447  206485 cri.go:89] found id: ""
	I1123 08:44:22.429455  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:44:22.429519  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.433831  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.438074  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:44:22.438148  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:44:22.465104  206485 cri.go:89] found id: ""
	I1123 08:44:22.465134  206485 logs.go:282] 0 containers: []
	W1123 08:44:22.465145  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:44:22.465153  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:44:22.465209  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:44:22.491961  206485 cri.go:89] found id: ""
	I1123 08:44:22.491983  206485 logs.go:282] 0 containers: []
	W1123 08:44:22.491991  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:44:22.492000  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:44:22.492011  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:44:22.505320  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:44:22.505348  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:22.539731  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:44:22.539760  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:22.567009  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:44:22.567043  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:22.597917  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:44:22.597948  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:44:22.656054  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:44:22.656075  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:44:22.656091  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:22.687968  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:44:22.687997  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:22.723374  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:44:22.723406  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:22.757145  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:44:22.757174  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:22.812852  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:44:22.812884  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:44:22.858684  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:44:22.858714  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:44:22.889916  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:44:22.889943  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:44:25.480284  206485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:44:25.480769  206485 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:44:25.480816  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:44:25.480868  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:44:25.508127  206485 cri.go:89] found id: "630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:25.508152  206485 cri.go:89] found id: "fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:25.508158  206485 cri.go:89] found id: ""
	I1123 08:44:25.508166  206485 logs.go:282] 2 containers: [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3]
	I1123 08:44:25.508211  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.512252  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.516092  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1123 08:44:25.516151  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:44:25.543107  206485 cri.go:89] found id: "044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:25.543126  206485 cri.go:89] found id: ""
	I1123 08:44:25.543135  206485 logs.go:282] 1 containers: [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1]
	I1123 08:44:25.543184  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.547277  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1123 08:44:25.547341  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:44:25.572601  206485 cri.go:89] found id: ""
	I1123 08:44:25.572623  206485 logs.go:282] 0 containers: []
	W1123 08:44:25.572631  206485 logs.go:284] No container was found matching "coredns"
	I1123 08:44:25.572636  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:44:25.572724  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:44:25.598811  206485 cri.go:89] found id: "1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
	I1123 08:44:25.598835  206485 cri.go:89] found id: "c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:25.598841  206485 cri.go:89] found id: ""
	I1123 08:44:25.598849  206485 logs.go:282] 2 containers: [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9]
	I1123 08:44:25.598903  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.603116  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.606887  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:44:25.606951  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:44:25.634048  206485 cri.go:89] found id: ""
	I1123 08:44:25.634081  206485 logs.go:282] 0 containers: []
	W1123 08:44:25.634092  206485 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:44:25.634099  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:44:25.634159  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:44:25.660918  206485 cri.go:89] found id: "5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:25.660938  206485 cri.go:89] found id: "a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:25.660943  206485 cri.go:89] found id: ""
	I1123 08:44:25.660953  206485 logs.go:282] 2 containers: [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e]
	I1123 08:44:25.661009  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.665230  206485 ssh_runner.go:195] Run: which crictl
	I1123 08:44:25.669495  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1123 08:44:25.669555  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:44:25.696223  206485 cri.go:89] found id: ""
	I1123 08:44:25.696254  206485 logs.go:282] 0 containers: []
	W1123 08:44:25.696266  206485 logs.go:284] No container was found matching "kindnet"
	I1123 08:44:25.696275  206485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:44:25.696330  206485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:44:25.723010  206485 cri.go:89] found id: ""
	I1123 08:44:25.723036  206485 logs.go:282] 0 containers: []
	W1123 08:44:25.723046  206485 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:44:25.723059  206485 logs.go:123] Gathering logs for kube-scheduler [c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9] ...
	I1123 08:44:25.723074  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c82c9e1d93a5e6d0c97d2a50653e0a0e24a7d09dd9bc31f38c76b1e52ebb35f9"
	I1123 08:44:25.758369  206485 logs.go:123] Gathering logs for kube-controller-manager [5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b] ...
	I1123 08:44:25.758401  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b31aa0ae9e6f4796c821d96da5ffc1e20bc83e10fbbf63bd1b9716b861bd26b"
	I1123 08:44:25.785411  206485 logs.go:123] Gathering logs for kubelet ...
	I1123 08:44:25.785440  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:44:25.877430  206485 logs.go:123] Gathering logs for kube-apiserver [630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391] ...
	I1123 08:44:25.877466  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391"
	I1123 08:44:25.910311  206485 logs.go:123] Gathering logs for etcd [044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1] ...
	I1123 08:44:25.910338  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 044e753c77fd5ac122f63a6399605382f8ab9de0635c6673d96f00897dd6e4e1"
	I1123 08:44:25.944171  206485 logs.go:123] Gathering logs for kube-controller-manager [a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e] ...
	I1123 08:44:25.944200  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a43b20b4a15ff89af8a76d1f03bfab0a98debb934612c0de9daeabb46141d54e"
	I1123 08:44:25.975807  206485 logs.go:123] Gathering logs for containerd ...
	I1123 08:44:25.975840  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1123 08:44:26.026059  206485 logs.go:123] Gathering logs for container status ...
	I1123 08:44:26.026095  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:44:26.057028  206485 logs.go:123] Gathering logs for dmesg ...
	I1123 08:44:26.057053  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:44:26.070438  206485 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:44:26.070465  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:44:26.127558  206485 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:44:26.127581  206485 logs.go:123] Gathering logs for kube-apiserver [fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3] ...
	I1123 08:44:26.127596  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fb8518f573158d97f53fb245d32078fea9adfe77d427a893fc59a99e978ebdb3"
	I1123 08:44:26.160547  206485 logs.go:123] Gathering logs for kube-scheduler [1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a] ...
	I1123 08:44:26.160575  206485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1cf7292c277af6cb045477d977b4eb3ac8a26073812db340c39829184d070d7a"
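	
	Every "Gathering logs for ..." step above follows one pattern: resolve container IDs with crictl, then tail each container's log; runtime- and kubelet-level logs come from journald. A condensed sketch of that loop, using only invocations that appear verbatim in this run:
	
	    # IDs (all states, running and exited) for a given component
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # last 400 lines of one container's log (id taken from the listing above)
	    sudo /usr/local/bin/crictl logs --tail 400 630b64b5be0cb4dca7e76e587870658d617f546c982e0093315a0a29f8601391
	    # daemon-level logs via journald
	    sudo journalctl -u containerd -n 400
	    sudo journalctl -u kubelet -n 400
	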
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f14a1cb2dafd9       56cc512116c8f       8 seconds ago       Running             busybox                   0                   a02c6bdf2359c       busybox                                     default
	63405bdae4894       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   cfa3bb13bb673       storage-provisioner                         kube-system
	2cc89e7267ac7       52546a367cc9e       13 seconds ago      Running             coredns                   0                   2e5945031a228       coredns-66bc5c9577-4frmr                    kube-system
	711a6c6b3e3e1       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   56da3428753ff       kindnet-wkmxg                               kube-system
	578024ff344be       fc25172553d79       28 seconds ago      Running             kube-proxy                0                   0f04dc7d0f45d       kube-proxy-4775c                            kube-system
	a13994b985d61       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   5c63f9a4fdb1b       kube-controller-manager-no-preload-999106   kube-system
	a4e26b13f04d9       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   1a08dd52d7ff6       kube-scheduler-no-preload-999106            kube-system
	50b37fae6e17a       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   234a3670fe47c       kube-apiserver-no-preload-999106            kube-system
	77aeaa21182e4       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   1c54b1a9d7e47       etcd-no-preload-999106                      kube-system
	
	
	==> containerd <==
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.604304240Z" level=info msg="Container 63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.608065888Z" level=info msg="CreateContainer within sandbox \"2e5945031a22840b743cd39c9cb17b6cd02819b542e1ccbe4ade44fa878d1cfa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cc89e7267ac7630562f3769c91fe81185986fe21c675aca66b47e76b0b64159\""
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.608626798Z" level=info msg="StartContainer for \"2cc89e7267ac7630562f3769c91fe81185986fe21c675aca66b47e76b0b64159\""
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.609730021Z" level=info msg="connecting to shim 2cc89e7267ac7630562f3769c91fe81185986fe21c675aca66b47e76b0b64159" address="unix:///run/containerd/s/764aad2d543665de36e0fb9128b3b9bf2af86c8e6bc018d6ff79d28fedb038fe" protocol=ttrpc version=3
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.613050918Z" level=info msg="CreateContainer within sandbox \"cfa3bb13bb673a245a2439846b845f290bcf2036baff1f4a12f4cee09a4188d2\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7\""
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.613698877Z" level=info msg="StartContainer for \"63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7\""
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.614794700Z" level=info msg="connecting to shim 63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7" address="unix:///run/containerd/s/a9eb7b146d3de1c9d88f4eb8d66b152b7db740ca40bae611f2b9d44cf21f906a" protocol=ttrpc version=3
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.663903640Z" level=info msg="StartContainer for \"63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7\" returns successfully"
	Nov 23 08:44:16 no-preload-999106 containerd[667]: time="2025-11-23T08:44:16.674801572Z" level=info msg="StartContainer for \"2cc89e7267ac7630562f3769c91fe81185986fe21c675aca66b47e76b0b64159\" returns successfully"
	Nov 23 08:44:19 no-preload-999106 containerd[667]: time="2025-11-23T08:44:19.908523668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f39d2d0a-c018-4ceb-a70a-4746b4cb29b7,Namespace:default,Attempt:0,}"
	Nov 23 08:44:19 no-preload-999106 containerd[667]: time="2025-11-23T08:44:19.949738728Z" level=info msg="connecting to shim a02c6bdf2359c79b2d471ca115d7b9846ea74efd46463fc1adb1b69683857285" address="unix:///run/containerd/s/e6f6ea797bcf5a0d8a997721aa800f35bbeb5dc4a189527ea46d82f6158dc06b" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:44:20 no-preload-999106 containerd[667]: time="2025-11-23T08:44:20.022770433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f39d2d0a-c018-4ceb-a70a-4746b4cb29b7,Namespace:default,Attempt:0,} returns sandbox id \"a02c6bdf2359c79b2d471ca115d7b9846ea74efd46463fc1adb1b69683857285\""
	Nov 23 08:44:20 no-preload-999106 containerd[667]: time="2025-11-23T08:44:20.024433308Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.202414719Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.203163573Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.204120958Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.205812083Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.206181245Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.181708896s"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.206214207Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.211334604Z" level=info msg="CreateContainer within sandbox \"a02c6bdf2359c79b2d471ca115d7b9846ea74efd46463fc1adb1b69683857285\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.219402208Z" level=info msg="Container f14a1cb2dafd9e7f055524ad5cf6df49c90aaf9227a06f57c9afdf1b85bf64a0: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.226170075Z" level=info msg="CreateContainer within sandbox \"a02c6bdf2359c79b2d471ca115d7b9846ea74efd46463fc1adb1b69683857285\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"f14a1cb2dafd9e7f055524ad5cf6df49c90aaf9227a06f57c9afdf1b85bf64a0\""
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.226867512Z" level=info msg="StartContainer for \"f14a1cb2dafd9e7f055524ad5cf6df49c90aaf9227a06f57c9afdf1b85bf64a0\""
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.227834015Z" level=info msg="connecting to shim f14a1cb2dafd9e7f055524ad5cf6df49c90aaf9227a06f57c9afdf1b85bf64a0" address="unix:///run/containerd/s/e6f6ea797bcf5a0d8a997721aa800f35bbeb5dc4a189527ea46d82f6158dc06b" protocol=ttrpc version=3
	Nov 23 08:44:22 no-preload-999106 containerd[667]: time="2025-11-23T08:44:22.287319879Z" level=info msg="StartContainer for \"f14a1cb2dafd9e7f055524ad5cf6df49c90aaf9227a06f57c9afdf1b85bf64a0\" returns successfully"
	
	
	==> coredns [2cc89e7267ac7630562f3769c91fe81185986fe21c675aca66b47e76b0b64159] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46471 - 63279 "HINFO IN 5262590171811647033.5549675669037636196. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.113582469s
	
	
	==> describe nodes <==
	Name:               no-preload-999106
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-999106
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=no-preload-999106
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_43_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:43:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-999106
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:44:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:26 +0000   Sun, 23 Nov 2025 08:43:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:26 +0000   Sun, 23 Nov 2025 08:43:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:26 +0000   Sun, 23 Nov 2025 08:43:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:44:26 +0000   Sun, 23 Nov 2025 08:44:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-999106
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                513fe179-27e5-4ae7-826a-4786a28960de
	  Boot ID:                    3bab2277-1db4-4284-9fcc-5d1d58e87eb4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-4frmr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-999106                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-wkmxg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-999106             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-999106    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-4775c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-999106             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node no-preload-999106 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node no-preload-999106 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node no-preload-999106 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node no-preload-999106 event: Registered Node no-preload-999106 in Controller
	  Normal  NodeReady                14s   kubelet          Node no-preload-999106 status is now: NodeReady
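	
	The node snapshot above is the post-mortem's "describe nodes" output for no-preload-999106, produced with the cluster's bundled kubectl; the invocation has the same shape as the describe-nodes calls logged earlier in this run (here it succeeds because this profile's apiserver is reachable):
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	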
	
	
	==> dmesg <==
	[Nov23 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395963] i8042: Warning: Keylock active
	[  +0.012075] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497035] block sda: the capability attribute has been deprecated.
	[  +0.088048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022581] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.308229] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [77aeaa21182e4928eee6a02a9cc9b49776b2b0847a46a97762af20e78cd0e209] <==
	{"level":"warn","ts":"2025-11-23T08:43:53.326595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.335740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.351807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.361769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.369577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.377265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.385504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.393894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.402014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.416786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.423527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.434035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.441325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.449091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.456194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.462618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.470053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.476949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.483521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.490271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.496838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.503963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.524213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.540163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:53.604152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44770","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:44:30 up  1:26,  0 user,  load average: 2.37, 2.47, 1.77
	Linux no-preload-999106 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [711a6c6b3e3e10b33da0eea19761c001ff7749f1323d476a25a86945a8958d6e] <==
	I1123 08:44:05.844554       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:05.844891       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:44:05.845021       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:05.845036       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:05.845055       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:06.045540       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:06.045632       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:06.045672       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:06.133122       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:06.446414       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:06.446441       1 metrics.go:72] Registering metrics
	I1123 08:44:06.446531       1 controller.go:711] "Syncing nftables rules"
	I1123 08:44:16.049759       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:44:16.049829       1 main.go:301] handling current node
	I1123 08:44:26.047791       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:44:26.047854       1 main.go:301] handling current node
	
	
	==> kube-apiserver [50b37fae6e17ae766a8008266e820179ef533d1d7d3eadca04a01f3868a49584] <==
	E1123 08:43:54.158526       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 08:43:54.206883       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:43:54.210795       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:43:54.210968       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:43:54.217329       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:43:54.217636       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:43:54.293120       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:43:55.008889       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:43:55.012788       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:43:55.012812       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:43:55.466543       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:43:55.501568       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:43:55.612582       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:43:55.619002       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:43:55.619995       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:43:55.624482       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:43:56.042573       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:43:56.391155       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:43:56.402802       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:43:56.412036       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:44:01.395890       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:01.400070       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:01.794686       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:44:01.894063       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:44:27.709782       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:56878: use of closed network connection
	
	
	==> kube-controller-manager [a13994b985d61282ea7193a0f310c727fce78ee0999e414f0d071cd87e65f853] <==
	I1123 08:44:01.014751       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:44:01.039726       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:01.039747       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:44:01.039755       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:44:01.040002       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:44:01.040785       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:44:01.040824       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:44:01.040862       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:44:01.040873       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:44:01.040896       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:44:01.040868       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:44:01.041022       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:44:01.041292       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:44:01.041608       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:44:01.041676       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:44:01.041714       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:44:01.042885       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:44:01.042928       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:44:01.042974       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:44:01.043145       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:44:01.046850       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:01.046860       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:44:01.046878       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:44:01.061450       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:20.994033       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [578024ff344be4a5fc7ab094a44f9bc79f4cccce12f99fa0d25f0324d44303b7] <==
	I1123 08:44:02.655868       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:02.724581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:02.825497       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:02.825538       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:44:02.825630       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:02.850324       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:02.850383       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:02.856443       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:02.856917       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:02.856957       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:02.858534       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:02.858553       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:02.858584       1 config.go:200] "Starting service config controller"
	I1123 08:44:02.859142       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:02.858676       1 config.go:309] "Starting node config controller"
	I1123 08:44:02.859180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:02.859186       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:02.858671       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:02.859194       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:02.959492       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:44:02.959516       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:44:02.959550       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a4e26b13f04d988d880cb437c2f2848d048b5790cf2383802d421e28a1de61fe] <==
	E1123 08:43:54.053444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:43:54.053284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:43:54.053481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:43:54.053366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:43:54.053603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:43:54.053693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:43:54.053728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:43:54.053745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:43:54.053772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:43:54.053801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:43:54.053841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:43:54.053894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:43:54.906479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:43:54.986264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:43:55.029504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:43:55.034470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:43:55.038553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:43:55.079288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:43:55.124526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:43:55.149590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:43:55.190050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:43:55.199094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:43:55.229165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:43:55.295785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 08:43:57.151080       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:43:57 no-preload-999106 kubelet[2177]: E1123 08:43:57.289086    2177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-999106\" already exists" pod="kube-system/kube-apiserver-no-preload-999106"
	Nov 23 08:43:57 no-preload-999106 kubelet[2177]: I1123 08:43:57.301546    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-999106" podStartSLOduration=2.301526127 podStartE2EDuration="2.301526127s" podCreationTimestamp="2025-11-23 08:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.301345592 +0000 UTC m=+1.134413322" watchObservedRunningTime="2025-11-23 08:43:57.301526127 +0000 UTC m=+1.134593858"
	Nov 23 08:43:57 no-preload-999106 kubelet[2177]: I1123 08:43:57.322366    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-999106" podStartSLOduration=1.3223489050000001 podStartE2EDuration="1.322348905s" podCreationTimestamp="2025-11-23 08:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.312159273 +0000 UTC m=+1.145227005" watchObservedRunningTime="2025-11-23 08:43:57.322348905 +0000 UTC m=+1.155416635"
	Nov 23 08:43:57 no-preload-999106 kubelet[2177]: I1123 08:43:57.332336    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-999106" podStartSLOduration=1.332314707 podStartE2EDuration="1.332314707s" podCreationTimestamp="2025-11-23 08:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.322465267 +0000 UTC m=+1.155532993" watchObservedRunningTime="2025-11-23 08:43:57.332314707 +0000 UTC m=+1.165382432"
	Nov 23 08:43:57 no-preload-999106 kubelet[2177]: I1123 08:43:57.332456    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-999106" podStartSLOduration=1.332451238 podStartE2EDuration="1.332451238s" podCreationTimestamp="2025-11-23 08:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:57.332238714 +0000 UTC m=+1.165306446" watchObservedRunningTime="2025-11-23 08:43:57.332451238 +0000 UTC m=+1.165518968"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.106211    2177 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.106883    2177 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.979903    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a4de139-3851-46a4-b057-5e61880bd43f-kube-proxy\") pod \"kube-proxy-4775c\" (UID: \"8a4de139-3851-46a4-b057-5e61880bd43f\") " pod="kube-system/kube-proxy-4775c"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.979959    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb8pv\" (UniqueName: \"kubernetes.io/projected/f7f0591c-04e2-4301-b210-21fd2cfa2614-kube-api-access-xb8pv\") pod \"kindnet-wkmxg\" (UID: \"f7f0591c-04e2-4301-b210-21fd2cfa2614\") " pod="kube-system/kindnet-wkmxg"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.979979    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a4de139-3851-46a4-b057-5e61880bd43f-xtables-lock\") pod \"kube-proxy-4775c\" (UID: \"8a4de139-3851-46a4-b057-5e61880bd43f\") " pod="kube-system/kube-proxy-4775c"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.979998    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdlnb\" (UniqueName: \"kubernetes.io/projected/8a4de139-3851-46a4-b057-5e61880bd43f-kube-api-access-gdlnb\") pod \"kube-proxy-4775c\" (UID: \"8a4de139-3851-46a4-b057-5e61880bd43f\") " pod="kube-system/kube-proxy-4775c"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.980012    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7f0591c-04e2-4301-b210-21fd2cfa2614-xtables-lock\") pod \"kindnet-wkmxg\" (UID: \"f7f0591c-04e2-4301-b210-21fd2cfa2614\") " pod="kube-system/kindnet-wkmxg"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.980027    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7f0591c-04e2-4301-b210-21fd2cfa2614-lib-modules\") pod \"kindnet-wkmxg\" (UID: \"f7f0591c-04e2-4301-b210-21fd2cfa2614\") " pod="kube-system/kindnet-wkmxg"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.980076    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7f0591c-04e2-4301-b210-21fd2cfa2614-cni-cfg\") pod \"kindnet-wkmxg\" (UID: \"f7f0591c-04e2-4301-b210-21fd2cfa2614\") " pod="kube-system/kindnet-wkmxg"
	Nov 23 08:44:01 no-preload-999106 kubelet[2177]: I1123 08:44:01.980151    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a4de139-3851-46a4-b057-5e61880bd43f-lib-modules\") pod \"kube-proxy-4775c\" (UID: \"8a4de139-3851-46a4-b057-5e61880bd43f\") " pod="kube-system/kube-proxy-4775c"
	Nov 23 08:44:03 no-preload-999106 kubelet[2177]: I1123 08:44:03.445710    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4775c" podStartSLOduration=2.445677445 podStartE2EDuration="2.445677445s" podCreationTimestamp="2025-11-23 08:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:03.304148929 +0000 UTC m=+7.137216658" watchObservedRunningTime="2025-11-23 08:44:03.445677445 +0000 UTC m=+7.278745175"
	Nov 23 08:44:06 no-preload-999106 kubelet[2177]: I1123 08:44:06.313121    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wkmxg" podStartSLOduration=2.4096422459999998 podStartE2EDuration="5.313102245s" podCreationTimestamp="2025-11-23 08:44:01 +0000 UTC" firstStartedPulling="2025-11-23 08:44:02.602852046 +0000 UTC m=+6.435919770" lastFinishedPulling="2025-11-23 08:44:05.506312043 +0000 UTC m=+9.339379769" observedRunningTime="2025-11-23 08:44:06.313099577 +0000 UTC m=+10.146167307" watchObservedRunningTime="2025-11-23 08:44:06.313102245 +0000 UTC m=+10.146169974"
	Nov 23 08:44:16 no-preload-999106 kubelet[2177]: I1123 08:44:16.125364    2177 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:44:16 no-preload-999106 kubelet[2177]: I1123 08:44:16.281861    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2cm\" (UniqueName: \"kubernetes.io/projected/9305ab6d-7709-40d0-a8a3-d64dda164119-kube-api-access-5b2cm\") pod \"coredns-66bc5c9577-4frmr\" (UID: \"9305ab6d-7709-40d0-a8a3-d64dda164119\") " pod="kube-system/coredns-66bc5c9577-4frmr"
	Nov 23 08:44:16 no-preload-999106 kubelet[2177]: I1123 08:44:16.281935    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/328b82cc-8bf7-46bc-bab9-f254c0716802-tmp\") pod \"storage-provisioner\" (UID: \"328b82cc-8bf7-46bc-bab9-f254c0716802\") " pod="kube-system/storage-provisioner"
	Nov 23 08:44:16 no-preload-999106 kubelet[2177]: I1123 08:44:16.281963    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96kvf\" (UniqueName: \"kubernetes.io/projected/328b82cc-8bf7-46bc-bab9-f254c0716802-kube-api-access-96kvf\") pod \"storage-provisioner\" (UID: \"328b82cc-8bf7-46bc-bab9-f254c0716802\") " pod="kube-system/storage-provisioner"
	Nov 23 08:44:16 no-preload-999106 kubelet[2177]: I1123 08:44:16.281986    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9305ab6d-7709-40d0-a8a3-d64dda164119-config-volume\") pod \"coredns-66bc5c9577-4frmr\" (UID: \"9305ab6d-7709-40d0-a8a3-d64dda164119\") " pod="kube-system/coredns-66bc5c9577-4frmr"
	Nov 23 08:44:17 no-preload-999106 kubelet[2177]: I1123 08:44:17.338035    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.338015855 podStartE2EDuration="15.338015855s" podCreationTimestamp="2025-11-23 08:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:17.33797982 +0000 UTC m=+21.171047550" watchObservedRunningTime="2025-11-23 08:44:17.338015855 +0000 UTC m=+21.171083585"
	Nov 23 08:44:17 no-preload-999106 kubelet[2177]: I1123 08:44:17.349102    2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4frmr" podStartSLOduration=15.349084852 podStartE2EDuration="15.349084852s" podCreationTimestamp="2025-11-23 08:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:17.348846652 +0000 UTC m=+21.181914381" watchObservedRunningTime="2025-11-23 08:44:17.349084852 +0000 UTC m=+21.182152584"
	Nov 23 08:44:19 no-preload-999106 kubelet[2177]: I1123 08:44:19.703995    2177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkk28\" (UniqueName: \"kubernetes.io/projected/f39d2d0a-c018-4ceb-a70a-4746b4cb29b7-kube-api-access-fkk28\") pod \"busybox\" (UID: \"f39d2d0a-c018-4ceb-a70a-4746b4cb29b7\") " pod="default/busybox"
	
	
	==> storage-provisioner [63405bdae48943f3fdfa5320602b28e1a4aa4128249e8068c3ff9f091f800ce7] <==
	I1123 08:44:16.674767       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:44:16.683745       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:44:16.683801       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:44:16.686626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:16.692338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:44:16.692589       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:44:16.692807       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-999106_d8c422c1-340f-45cf-95c6-b29dd02d7ad7!
	I1123 08:44:16.692739       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68a4bd73-1962-40e8-b78c-3faa8bee8f62", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-999106_d8c422c1-340f-45cf-95c6-b29dd02d7ad7 became leader
	W1123 08:44:16.695144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:16.699472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:44:16.793694       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-999106_d8c422c1-340f-45cf-95c6-b29dd02d7ad7!
	W1123 08:44:18.703063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:18.707134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:20.710228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:20.715772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:22.719315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:22.723039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:24.726425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:24.731256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:26.735101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:26.740812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:28.744472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:28.748585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:30.751921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:30.755544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
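The storage-provisioner log above is dominated by client-go deprecation warnings: its leader election still uses the kube-system/k8s.io-minikube-hostpath lock as a core/v1 Endpoints object, which clients flag as deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. The lock object can be inspected directly (a minimal sketch, reusing the context and object name from the log above):

	# Show the Endpoints object used as the leader-election lock;
	# the holder identity is typically recorded in its annotations.
	kubectl --context no-preload-999106 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml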
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-999106 -n no-preload-999106
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-999106 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
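This post-mortem query lists every pod not in the Running phase across all namespaces, so anything left Pending, Succeeded, or Failed would be named here. The same filter is reusable when triaging by hand (identical flags to the command above, merely reflowed for readability):

	# Names of all pods whose status.phase is not Running, in any namespace
	kubectl --context no-preload-999106 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'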
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (11.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (15.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-319770 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e1165604-fe4b-4b63-a3e2-5378a2836868] Pending
helpers_test.go:352: "busybox" [e1165604-fe4b-4b63-a3e2-5378a2836868] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e1165604-fe4b-4b63-a3e2-5378a2836868] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00483226s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-319770 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
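The failed assertion is the soft limit on open file descriptors inside the pod: the test expects 1048576 but observed 1024. The probe can be re-run by hand against the same context (a minimal sketch, assuming the busybox pod from testdata/busybox.yaml is still present):

	# Print the soft open-file limit inside the container;
	# this is exactly the check the test performs.
	kubectl --context embed-certs-319770 exec busybox -- /bin/sh -c "ulimit -n"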
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-319770
helpers_test.go:243: (dbg) docker inspect embed-certs-319770:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68",
	        "Created": "2025-11-23T08:45:16.059734305Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:45:16.110806257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68/hostname",
	        "HostsPath": "/var/lib/docker/containers/dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68/hosts",
	        "LogPath": "/var/lib/docker/containers/dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68/dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68-json.log",
	        "Name": "/embed-certs-319770",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-319770:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-319770",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68",
	                "LowerDir": "/var/lib/docker/overlay2/0f787ff3b62a869d8cae9841b2bb9054d9f115324aace42731b35b65551dc576-init/diff:/var/lib/docker/overlay2/ee04ca8b85d0dedeb02bd9a5189a59a7f53ca89a011d262a78df32fa43bf0598/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f787ff3b62a869d8cae9841b2bb9054d9f115324aace42731b35b65551dc576/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f787ff3b62a869d8cae9841b2bb9054d9f115324aace42731b35b65551dc576/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f787ff3b62a869d8cae9841b2bb9054d9f115324aace42731b35b65551dc576/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-319770",
	                "Source": "/var/lib/docker/volumes/embed-certs-319770/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-319770",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-319770",
	                "name.minikube.sigs.k8s.io": "embed-certs-319770",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6520d3581e59e1fb727b37d1efa4a5f233b63ab2d98f871884b24b7080b39293",
	            "SandboxKey": "/var/run/docker/netns/6520d3581e59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-319770": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "547544c725d7f45f6b42e2321527d80cabd824b4ef4a7493d17401e268681439",
	                    "EndpointID": "621dee2d113407623ca2acd7f8ef9d8206359ab810192fcfcc9b9dfb7ed05d51",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "9e:4b:59:9a:42:c4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-319770",
	                        "dbe19d3629a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
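One detail worth noting in the inspect output above: "Ulimits" in HostConfig is an empty array, so the container was created without an explicit nofile override and the in-container limit falls back to whatever the runtime and Docker daemon provide. Both sides can be checked with standard tooling (a sketch; the systemd unit name assumes a stock dockerd installation):

	# Ulimit overrides recorded on the container (empty in this run)
	docker inspect embed-certs-319770 --format '{{json .HostConfig.Ulimits}}'
	# NOFILE limit of the Docker daemon itself, which containers
	# typically inherit unless default-ulimits is configured
	systemctl show docker --property=LimitNOFILE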
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-319770 -n embed-certs-319770
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-319770 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-319770 logs -n 25: (1.431390576s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ start   │ -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p cert-expiration-680868 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-680868       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p kubernetes-upgrade-776670                                                                                                                                                                                                                        │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-319770 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-319770           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p cert-expiration-680868                                                                                                                                                                                                                           │ cert-expiration-680868       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-445958                                                                                                                                                                                                                     │ disable-driver-mounts-445958 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p default-k8s-diff-port-525009 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-525009 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-204346 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-204346 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ unpause │ -p old-k8s-version-204346 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-204346                                                                                                                                                                                                                           │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-204346                                                                                                                                                                                                                           │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ no-preload-999106 image list --format=json                                                                                                                                                                                                          │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-999106 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ unpause │ -p no-preload-999106 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p no-preload-999106                                                                                                                                                                                                                                │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p no-preload-999106                                                                                                                                                                                                                                │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p auto-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-794429                  │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-399335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ stop    │ -p newest-cni-399335 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ addons  │ enable dashboard -p newest-cni-399335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:46:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:46:02.262862  297115 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:46:02.263457  297115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:02.263472  297115 out.go:374] Setting ErrFile to fd 2...
	I1123 08:46:02.263479  297115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:02.263959  297115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:46:02.265014  297115 out.go:368] Setting JSON to false
	I1123 08:46:02.266198  297115 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5303,"bootTime":1763882259,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:46:02.266288  297115 start.go:143] virtualization: kvm guest
	I1123 08:46:02.268238  297115 out.go:179] * [newest-cni-399335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:46:02.270020  297115 notify.go:221] Checking for updates...
	I1123 08:46:02.270024  297115 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:46:02.271482  297115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:46:02.272843  297115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:46:02.274014  297115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:46:02.275227  297115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:46:02.276361  297115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:46:02.278076  297115 config.go:182] Loaded profile config "newest-cni-399335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:02.278849  297115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:46:02.305981  297115 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:46:02.306077  297115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:02.369456  297115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:46:02.357744797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:02.369605  297115 docker.go:319] overlay module found
	I1123 08:46:02.371588  297115 out.go:179] * Using the docker driver based on existing profile
	I1123 08:46:02.372889  297115 start.go:309] selected driver: docker
	I1123 08:46:02.372908  297115 start.go:927] validating driver "docker" against &{Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:02.373024  297115 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:46:02.373690  297115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:02.434152  297115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:46:02.423470428 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:02.434445  297115 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:46:02.434482  297115 cni.go:84] Creating CNI manager for ""
	I1123 08:46:02.434550  297115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:46:02.434584  297115 start.go:353] cluster config:
	{Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:02.437216  297115 out.go:179] * Starting "newest-cni-399335" primary control-plane node in "newest-cni-399335" cluster
	I1123 08:46:02.438363  297115 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:46:02.439542  297115 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:46:02.440662  297115 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:46:02.440696  297115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:46:02.440705  297115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 08:46:02.440721  297115 cache.go:65] Caching tarball of preloaded images
	I1123 08:46:02.440861  297115 preload.go:238] Found /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:46:02.440884  297115 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:46:02.440996  297115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/config.json ...
	I1123 08:46:02.462167  297115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:46:02.462192  297115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:46:02.462213  297115 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:46:02.462248  297115 start.go:360] acquireMachinesLock for newest-cni-399335: {Name:mka68fc1b11056460ac5dd4946687e6696340967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:46:02.462317  297115 start.go:364] duration metric: took 44.173µs to acquireMachinesLock for "newest-cni-399335"
	I1123 08:46:02.462339  297115 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:46:02.462349  297115 fix.go:54] fixHost starting: 
	I1123 08:46:02.462592  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:02.480611  297115 fix.go:112] recreateIfNeeded on newest-cni-399335: state=Stopped err=<nil>
	W1123 08:46:02.480640  297115 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:46:02.037790  293483 out.go:252]   - Generating certificates and keys ...
	I1123 08:46:02.037896  293483 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:46:02.037981  293483 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:46:02.456059  293483 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:46:02.650760  293483 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:46:02.892889  293483 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:46:03.433697  293483 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:46:03.596148  293483 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:46:03.596284  293483 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-794429 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:46:03.904760  293483 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:46:03.904904  293483 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-794429 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:46:04.138573  293483 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:46:04.371416  293483 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:46:04.533631  293483 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:46:04.533727  293483 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:46:05.059932  293483 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:46:05.296891  293483 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:46:05.532157  293483 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:46:05.911922  293483 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:46:06.189126  293483 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:46:06.190020  293483 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:46:06.206499  293483 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:46:06.209148  293483 out.go:252]   - Booting up control plane ...
	I1123 08:46:06.209257  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:46:06.209349  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:46:06.209433  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:46:06.223747  293483 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:46:06.223880  293483 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:46:06.230267  293483 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:46:06.230625  293483 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:46:06.230707  293483 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:46:06.333353  293483 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:46:06.333489  293483 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:46:02.482405  297115 out.go:252] * Restarting existing docker container for "newest-cni-399335" ...
	I1123 08:46:02.482477  297115 cli_runner.go:164] Run: docker start newest-cni-399335
	I1123 08:46:02.785631  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:02.807142  297115 kic.go:430] container "newest-cni-399335" state is running.
	I1123 08:46:02.807612  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:02.827013  297115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/config.json ...
	I1123 08:46:02.827313  297115 machine.go:94] provisionDockerMachine start ...
	I1123 08:46:02.827393  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:02.848474  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:02.848851  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:02.848869  297115 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:46:02.849609  297115 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48164->127.0.0.1:33098: read: connection reset by peer
	I1123 08:46:05.993595  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-399335
	
	I1123 08:46:05.993630  297115 ubuntu.go:182] provisioning hostname "newest-cni-399335"
	I1123 08:46:05.993706  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.012745  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:06.012960  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:06.012974  297115 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-399335 && echo "newest-cni-399335" | sudo tee /etc/hostname
	I1123 08:46:06.167781  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-399335
	
	I1123 08:46:06.167881  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.188339  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:06.188686  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:06.188719  297115 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-399335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-399335/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-399335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:46:06.342749  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:46:06.342777  297115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-13876/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-13876/.minikube}
	I1123 08:46:06.342822  297115 ubuntu.go:190] setting up certificates
	I1123 08:46:06.342839  297115 provision.go:84] configureAuth start
	I1123 08:46:06.342903  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:06.364340  297115 provision.go:143] copyHostCerts
	I1123 08:46:06.364416  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem, removing ...
	I1123 08:46:06.364431  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem
	I1123 08:46:06.364526  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem (1078 bytes)
	I1123 08:46:06.364669  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem, removing ...
	I1123 08:46:06.364683  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem
	I1123 08:46:06.364724  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem (1123 bytes)
	I1123 08:46:06.364792  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem, removing ...
	I1123 08:46:06.364799  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem
	I1123 08:46:06.364823  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem (1675 bytes)
	I1123 08:46:06.364877  297115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem org=jenkins.newest-cni-399335 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-399335]
	I1123 08:46:06.479812  297115 provision.go:177] copyRemoteCerts
	I1123 08:46:06.479870  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:46:06.479911  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.500499  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.603344  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:46:06.621631  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:46:06.640892  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:46:06.659451  297115 provision.go:87] duration metric: took 316.596054ms to configureAuth
	I1123 08:46:06.659481  297115 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:46:06.659806  297115 config.go:182] Loaded profile config "newest-cni-399335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:06.659823  297115 machine.go:97] duration metric: took 3.832490175s to provisionDockerMachine
	I1123 08:46:06.659835  297115 start.go:293] postStartSetup for "newest-cni-399335" (driver="docker")
	I1123 08:46:06.659849  297115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:46:06.659904  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:46:06.659946  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.678221  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.780370  297115 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:46:06.783936  297115 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:46:06.783965  297115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:46:06.783976  297115 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/addons for local assets ...
	I1123 08:46:06.784034  297115 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/files for local assets ...
	I1123 08:46:06.784128  297115 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem -> 174422.pem in /etc/ssl/certs
	I1123 08:46:06.784237  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:46:06.791552  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem --> /etc/ssl/certs/174422.pem (1708 bytes)
	I1123 08:46:06.809068  297115 start.go:296] duration metric: took 149.216822ms for postStartSetup
	I1123 08:46:06.809157  297115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:46:06.809195  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.829536  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.933880  297115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:46:06.938359  297115 fix.go:56] duration metric: took 4.476004793s for fixHost
	I1123 08:46:06.938381  297115 start.go:83] releasing machines lock for "newest-cni-399335", held for 4.476053793s
	I1123 08:46:06.938445  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:06.957272  297115 ssh_runner.go:195] Run: cat /version.json
	I1123 08:46:06.957329  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.957376  297115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:46:06.957477  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.979733  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.981876  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:07.156878  297115 ssh_runner.go:195] Run: systemctl --version
	I1123 08:46:07.164235  297115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:46:07.169524  297115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:46:07.169588  297115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:46:07.180131  297115 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:46:07.180160  297115 start.go:496] detecting cgroup driver to use...
	I1123 08:46:07.180197  297115 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:46:07.180249  297115 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:46:07.202860  297115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:46:07.219930  297115 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:46:07.219994  297115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:46:07.238447  297115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:46:07.254293  297115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
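
A hedged aside, not part of the harness output: the systemd cgroup-driver probe and the cri-docker shutdown logged above can be spot-checked by hand, assuming a local Docker daemon and the profile still running:

	# mirrors the 'detected "systemd" cgroup driver on host os' probe above
	docker info --format '{{.CgroupDriver}}'    # expect: systemd
	# confirm cri-docker really was stopped inside the node
	minikube -p newest-cni-399335 ssh -- systemctl is-active cri-docker.socket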
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	106a6e4800152       56cc512116c8f       8 seconds ago       Running             busybox                   0                   f6fbb8317ccfb       busybox                                      default
	51abe1f942581       52546a367cc9e       14 seconds ago      Running             coredns                   0                   034a1589689fc       coredns-66bc5c9577-7h498                     kube-system
	01edc02abad60       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   b24815c1e9982       storage-provisioner                          kube-system
	4454d77969b6b       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   3786a868a3941       kindnet-vp4s9                                kube-system
	5f7d35ec59fcf       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   0e72adff43b1c       kube-proxy-h9zbj                             kube-system
	4a05f14d7bdd8       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   93dac5c113aee       kube-apiserver-embed-certs-319770            kube-system
	4bae937f30535       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   8dfea39f8b9da       etcd-embed-certs-319770                      kube-system
	6268b5880694a       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   91fc9e3cc49c8       kube-scheduler-embed-certs-319770            kube-system
	30e28d8cdad13       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   c888f01c108cd       kube-controller-manager-embed-certs-319770   kube-system
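
The table above is CRI-formatted container state. A minimal reproduction sketch, assuming crictl ships in the node image (which minikube's containerd-based images normally do):

	minikube -p embed-certs-319770 ssh -- sudo crictl ps -a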
	
	
	==> containerd <==
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.705329745Z" level=info msg="Container 51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.706237623Z" level=info msg="CreateContainer within sandbox \"b24815c1e9982562ecd862c1e928eac7072612bcc07ceebb0b9dca5cac1e2555\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"01edc02abad606a19c85fea8936232faabe985b45747a6aafb50e2f775b8c9c5\""
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.708328309Z" level=info msg="StartContainer for \"01edc02abad606a19c85fea8936232faabe985b45747a6aafb50e2f775b8c9c5\""
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.709417987Z" level=info msg="connecting to shim 01edc02abad606a19c85fea8936232faabe985b45747a6aafb50e2f775b8c9c5" address="unix:///run/containerd/s/f2fe3bf97b2579055220009f5355e1c22f8a0d9c15242c65a8294b5aa8e9f1c8" protocol=ttrpc version=3
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.713552682Z" level=info msg="CreateContainer within sandbox \"034a1589689fc8db7530156d8ffc74cd60bf114449aaba91b90f2ae10780b296\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299\""
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.714234759Z" level=info msg="StartContainer for \"51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299\""
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.715275692Z" level=info msg="connecting to shim 51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299" address="unix:///run/containerd/s/d91b7efa740f749a8dde8dce2e284678361e2e8ba3fc223ea487ab4c30d0babf" protocol=ttrpc version=3
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.785486139Z" level=info msg="StartContainer for \"51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299\" returns successfully"
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.796909387Z" level=info msg="StartContainer for \"01edc02abad606a19c85fea8936232faabe985b45747a6aafb50e2f775b8c9c5\" returns successfully"
	Nov 23 08:45:57 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:57.463534875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e1165604-fe4b-4b63-a3e2-5378a2836868,Namespace:default,Attempt:0,}"
	Nov 23 08:45:57 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:57.511715997Z" level=info msg="connecting to shim f6fbb8317ccfb036746b1085a92720f6a25bd45d524149e42ef5dba483c9a70a" address="unix:///run/containerd/s/f2fe4ac49a8c960992f3918e6d5932cb35deb80a2c61d776f55aed16bfef660d" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:45:57 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:57.588382305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e1165604-fe4b-4b63-a3e2-5378a2836868,Namespace:default,Attempt:0,} returns sandbox id \"f6fbb8317ccfb036746b1085a92720f6a25bd45d524149e42ef5dba483c9a70a\""
	Nov 23 08:45:57 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:57.591606421Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.233479333Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.234048382Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.235177264Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.237343144Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.237975503Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.64630045s"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.238023376Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.243391192Z" level=info msg="CreateContainer within sandbox \"f6fbb8317ccfb036746b1085a92720f6a25bd45d524149e42ef5dba483c9a70a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.252349024Z" level=info msg="Container 106a6e48001520688d64f24e7e505034d5fb1df78656f963c3cf59a72b8bfbe0: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.259619539Z" level=info msg="CreateContainer within sandbox \"f6fbb8317ccfb036746b1085a92720f6a25bd45d524149e42ef5dba483c9a70a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"106a6e48001520688d64f24e7e505034d5fb1df78656f963c3cf59a72b8bfbe0\""
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.260471267Z" level=info msg="StartContainer for \"106a6e48001520688d64f24e7e505034d5fb1df78656f963c3cf59a72b8bfbe0\""
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.261448400Z" level=info msg="connecting to shim 106a6e48001520688d64f24e7e505034d5fb1df78656f963c3cf59a72b8bfbe0" address="unix:///run/containerd/s/f2fe4ac49a8c960992f3918e6d5932cb35deb80a2c61d776f55aed16bfef660d" protocol=ttrpc version=3
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.323690233Z" level=info msg="StartContainer for \"106a6e48001520688d64f24e7e505034d5fb1df78656f963c3cf59a72b8bfbe0\" returns successfully"
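
The containerd lines above are journald output; a sketch for re-reading them on the node, assuming containerd runs under systemd there:

	minikube -p embed-certs-319770 ssh -- sudo journalctl -u containerd --no-pager -n 50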
	
	
	==> coredns [51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36834 - 46084 "HINFO IN 8511113068449997383.2530350933837730691. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020758373s
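
The bracketed id in the section header is the coredns container; a hypothetical way to replay its log directly, again assuming crictl on the node:

	minikube -p embed-certs-319770 ssh -- sudo crictl logs 51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299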
	
	
	==> describe nodes <==
	Name:               embed-certs-319770
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-319770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=embed-certs-319770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_45_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:45:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-319770
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:46:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-319770
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                063f1133-c7d9-4c9a-97e7-c82016e59ce8
	  Boot ID:                    3bab2277-1db4-4284-9fcc-5d1d58e87eb4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-7h498                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-embed-certs-319770                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-vp4s9                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-embed-certs-319770             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-319770    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-h9zbj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-embed-certs-319770             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeAllocatableEnforced  40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  39s (x8 over 40s)  kubelet          Node embed-certs-319770 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 40s)  kubelet          Node embed-certs-319770 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x7 over 40s)  kubelet          Node embed-certs-319770 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node embed-certs-319770 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node embed-certs-319770 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node embed-certs-319770 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node embed-certs-319770 event: Registered Node embed-certs-319770 in Controller
	  Normal  NodeReady                15s                kubelet          Node embed-certs-319770 status is now: NodeReady
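
The node dump above is standard kubectl output; the live equivalent, assuming the test context still exists, would be:

	kubectl --context embed-certs-319770 describe node embed-certs-319770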
	
	
	==> dmesg <==
	[Nov23 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395963] i8042: Warning: Keylock active
	[  +0.012075] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497035] block sda: the capability attribute has been deprecated.
	[  +0.088048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022581] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.308229] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [4bae937f30535bc38b9817b68491148a8d9341a43922d73fb1b88ee85c0ddd1e] <==
	{"level":"warn","ts":"2025-11-23T08:45:34.334534Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"209.297274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:45:34.334610Z","caller":"traceutil/trace.go:172","msg":"trace[301407842] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:0; response_revision:64; }","duration":"209.387963ms","start":"2025-11-23T08:45:34.125204Z","end":"2025-11-23T08:45:34.334592Z","steps":["trace[301407842] 'agreement among raft nodes before linearized reading'  (duration: 125.198354ms)","trace[301407842] 'range keys from in-memory index tree'  (duration: 84.061391ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:34.334834Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"208.762314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/embed-certs-319770.187a965cb136ec06\" limit:1 ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2025-11-23T08:45:34.334880Z","caller":"traceutil/trace.go:172","msg":"trace[1606856148] range","detail":"{range_begin:/registry/events/default/embed-certs-319770.187a965cb136ec06; range_end:; response_count:1; response_revision:66; }","duration":"208.815497ms","start":"2025-11-23T08:45:34.126053Z","end":"2025-11-23T08:45:34.334868Z","steps":["trace[1606856148] 'agreement among raft nodes before linearized reading'  (duration: 208.676055ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:34.334987Z","caller":"traceutil/trace.go:172","msg":"trace[377839492] transaction","detail":"{read_only:false; response_revision:65; number_of_response:1; }","duration":"231.500525ms","start":"2025-11-23T08:45:34.103471Z","end":"2025-11-23T08:45:34.334971Z","steps":["trace[377839492] 'process raft request'  (duration: 146.853059ms)","trace[377839492] 'compare'  (duration: 84.046912ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:34.335027Z","caller":"traceutil/trace.go:172","msg":"trace[1486139997] transaction","detail":"{read_only:false; response_revision:66; number_of_response:1; }","duration":"229.948829ms","start":"2025-11-23T08:45:34.105066Z","end":"2025-11-23T08:45:34.335015Z","steps":["trace[1486139997] 'process raft request'  (duration: 229.565158ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:34.499013Z","caller":"traceutil/trace.go:172","msg":"trace[1761723829] linearizableReadLoop","detail":"{readStateIndex:75; appliedIndex:75; }","duration":"101.655781ms","start":"2025-11-23T08:45:34.397330Z","end":"2025-11-23T08:45:34.498986Z","steps":["trace[1761723829] 'read index received'  (duration: 101.641841ms)","trace[1761723829] 'applied index is now lower than readState.Index'  (duration: 12.417µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:34.562277Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.925842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:45:34.562354Z","caller":"traceutil/trace.go:172","msg":"trace[1692685865] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:70; }","duration":"165.013903ms","start":"2025-11-23T08:45:34.397320Z","end":"2025-11-23T08:45:34.562334Z","steps":["trace[1692685865] 'agreement among raft nodes before linearized reading'  (duration: 101.753542ms)","trace[1692685865] 'range keys from in-memory index tree'  (duration: 63.117384ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:34.562487Z","caller":"traceutil/trace.go:172","msg":"trace[160631960] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"210.517761ms","start":"2025-11-23T08:45:34.351957Z","end":"2025-11-23T08:45:34.562475Z","steps":["trace[160631960] 'process raft request'  (duration: 210.480127ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:34.562481Z","caller":"traceutil/trace.go:172","msg":"trace[1098312375] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"216.281446ms","start":"2025-11-23T08:45:34.346171Z","end":"2025-11-23T08:45:34.562453Z","steps":["trace[1098312375] 'process raft request'  (duration: 152.869116ms)","trace[1098312375] 'compare'  (duration: 63.163932ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:34.562751Z","caller":"traceutil/trace.go:172","msg":"trace[398310118] transaction","detail":"{read_only:false; response_revision:73; number_of_response:1; }","duration":"211.571953ms","start":"2025-11-23T08:45:34.351164Z","end":"2025-11-23T08:45:34.562735Z","steps":["trace[398310118] 'process raft request'  (duration: 211.242251ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:34.562757Z","caller":"traceutil/trace.go:172","msg":"trace[131933018] transaction","detail":"{read_only:false; response_revision:72; number_of_response:1; }","duration":"215.818014ms","start":"2025-11-23T08:45:34.346926Z","end":"2025-11-23T08:45:34.562744Z","steps":["trace[131933018] 'process raft request'  (duration: 215.418776ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:55.690057Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.602958ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:45:55.690160Z","caller":"traceutil/trace.go:172","msg":"trace[1245239545] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:459; }","duration":"151.713356ms","start":"2025-11-23T08:45:55.538418Z","end":"2025-11-23T08:45:55.690132Z","steps":["trace[1245239545] 'agreement among raft nodes before linearized reading'  (duration: 46.210767ms)","trace[1245239545] 'range keys from in-memory index tree'  (duration: 105.341272ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:55.690157Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.383202ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356836321523809 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.187a9662dd577ca2\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.187a9662dd577ca2\" value_size:606 lease:6414984799466747063 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:45:55.690244Z","caller":"traceutil/trace.go:172","msg":"trace[1528482515] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"254.98572ms","start":"2025-11-23T08:45:55.435247Z","end":"2025-11-23T08:45:55.690232Z","steps":["trace[1528482515] 'process raft request'  (duration: 149.467092ms)","trace[1528482515] 'compare'  (duration: 105.254654ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:56.038488Z","caller":"traceutil/trace.go:172","msg":"trace[1300372701] linearizableReadLoop","detail":"{readStateIndex:474; appliedIndex:474; }","duration":"100.916956ms","start":"2025-11-23T08:45:55.937544Z","end":"2025-11-23T08:45:56.038461Z","steps":["trace[1300372701] 'read index received'  (duration: 100.905818ms)","trace[1300372701] 'applied index is now lower than readState.Index'  (duration: 9.741µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:56.038702Z","caller":"traceutil/trace.go:172","msg":"trace[1114811507] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"120.039039ms","start":"2025-11-23T08:45:55.918640Z","end":"2025-11-23T08:45:56.038679Z","steps":["trace[1114811507] 'process raft request'  (duration: 119.866594ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:56.038720Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.152426ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-319770\" limit:1 ","response":"range_response_count:1 size:4481"}
	{"level":"info","ts":"2025-11-23T08:45:56.038774Z","caller":"traceutil/trace.go:172","msg":"trace[710127127] range","detail":"{range_begin:/registry/minions/embed-certs-319770; range_end:; response_count:1; response_revision:461; }","duration":"101.230054ms","start":"2025-11-23T08:45:55.937533Z","end":"2025-11-23T08:45:56.038763Z","steps":["trace[710127127] 'agreement among raft nodes before linearized reading'  (duration: 101.013255ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:56.960157Z","caller":"traceutil/trace.go:172","msg":"trace[1292204379] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"130.717683ms","start":"2025-11-23T08:45:56.829417Z","end":"2025-11-23T08:45:56.960135Z","steps":["trace[1292204379] 'process raft request'  (duration: 130.59198ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:57.143193Z","caller":"traceutil/trace.go:172","msg":"trace[31166621] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"119.023624ms","start":"2025-11-23T08:45:57.024149Z","end":"2025-11-23T08:45:57.143172Z","steps":["trace[31166621] 'process raft request'  (duration: 108.932802ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:57.416574Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.977453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-23T08:45:57.416668Z","caller":"traceutil/trace.go:172","msg":"trace[1150367530] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:467; }","duration":"113.069744ms","start":"2025-11-23T08:45:57.303566Z","end":"2025-11-23T08:45:57.416635Z","steps":["trace[1150367530] 'range keys from in-memory index tree'  (duration: 112.804371ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:46:08 up  1:28,  0 user,  load average: 4.05, 3.13, 2.08
	Linux embed-certs-319770 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4454d77969b6bbbc3b66179d1a05d52831ca84f4d98a95048852a9201227cb0c] <==
	I1123 08:45:42.938361       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:45:42.938666       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:45:42.938811       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:45:42.938830       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:45:42.938855       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:45:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:45:43.230043       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:45:43.230096       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:45:43.230111       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:45:43.230298       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:45:43.531215       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:43.531246       1 metrics.go:72] Registering metrics
	I1123 08:45:43.531318       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:53.150769       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:45:53.150843       1 main.go:301] handling current node
	I1123 08:46:03.143775       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:46:03.143815       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4a05f14d7bdd8ce645d4bbd1e83e0e54a19e3e1f9a659ee034f61e97ad1459e9] <==
	I1123 08:45:32.410873       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:45:32.422188       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:45:32.425673       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:32.503023       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1123 08:45:32.503495       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 08:45:32.503825       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:32.723215       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:45:33.493164       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:45:34.101989       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:45:34.102134       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:45:35.162124       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:45:35.209736       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:45:35.309106       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:45:35.318276       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:45:35.320421       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:45:35.327777       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:45:35.356001       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:45:36.207384       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:45:36.217034       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:45:36.224819       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:45:41.061375       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:41.065725       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:41.157351       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:45:41.258542       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 08:46:07.094383       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:53876: use of closed network connection
	
	
	==> kube-controller-manager [30e28d8cdad13d87eb3dc82d3e5b3665ac6b0d80b028992178d2afe1a71cc099] <==
	I1123 08:45:40.353572       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:45:40.353598       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:45:40.353628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:45:40.353658       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:45:40.354021       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:45:40.354948       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:45:40.354984       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:40.354988       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:45:40.354995       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:45:40.355014       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:40.355045       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:45:40.355049       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:45:40.355235       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:45:40.355435       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:45:40.355599       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:45:40.355633       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:45:40.355791       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:45:40.355834       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:45:40.357318       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:45:40.357339       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:45:40.359260       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:40.361315       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:45:40.369091       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:45:40.373755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:55.433134       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5f7d35ec59fcfd1c9cc2a482ffc8b8ad75e7ee0d38b8f9ba7a317ba6b099effb] <==
	I1123 08:45:42.497539       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:45:42.566337       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:45:42.667337       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:45:42.667385       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 08:45:42.667854       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:45:42.697972       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:45:42.698061       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:45:42.706582       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:45:42.706963       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:45:42.706987       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:42.708542       1 config.go:309] "Starting node config controller"
	I1123 08:45:42.708790       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:45:42.708698       1 config.go:200] "Starting service config controller"
	I1123 08:45:42.709093       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:45:42.708745       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:45:42.708735       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:45:42.709128       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:45:42.709132       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:45:42.809055       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:45:42.809593       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:45:42.809639       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:45:42.809661       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6268b5880694a881db962dc0b505da47995a15a55801cbf297a5676aa7ab6669] <==
	E1123 08:45:32.370999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:45:32.371020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:45:32.371168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:45:32.371217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:45:32.371280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:45:33.238941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:45:33.255417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:45:33.387023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:45:33.437454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:45:33.577416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:45:33.601702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:45:33.624210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:45:33.631428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:45:33.697394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:45:33.719884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:45:33.724202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:45:33.774081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:45:33.781451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:45:33.829002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:45:33.833357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:45:33.876376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:45:33.891745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:45:33.925265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:45:33.974005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1123 08:45:36.364722       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:45:40 embed-certs-319770 kubelet[1452]: I1123 08:45:40.319509    1452 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.286757    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8841647-df8d-4a10-bbbe-96e25fa96a6a-xtables-lock\") pod \"kube-proxy-h9zbj\" (UID: \"b8841647-df8d-4a10-bbbe-96e25fa96a6a\") " pod="kube-system/kube-proxy-h9zbj"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.286833    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9eb2add-33ca-4035-9dbb-3505ded226ed-xtables-lock\") pod \"kindnet-vp4s9\" (UID: \"f9eb2add-33ca-4035-9dbb-3505ded226ed\") " pod="kube-system/kindnet-vp4s9"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.286861    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwwjb\" (UniqueName: \"kubernetes.io/projected/f9eb2add-33ca-4035-9dbb-3505ded226ed-kube-api-access-hwwjb\") pod \"kindnet-vp4s9\" (UID: \"f9eb2add-33ca-4035-9dbb-3505ded226ed\") " pod="kube-system/kindnet-vp4s9"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.286899    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8841647-df8d-4a10-bbbe-96e25fa96a6a-kube-proxy\") pod \"kube-proxy-h9zbj\" (UID: \"b8841647-df8d-4a10-bbbe-96e25fa96a6a\") " pod="kube-system/kube-proxy-h9zbj"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.286945    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8841647-df8d-4a10-bbbe-96e25fa96a6a-lib-modules\") pod \"kube-proxy-h9zbj\" (UID: \"b8841647-df8d-4a10-bbbe-96e25fa96a6a\") " pod="kube-system/kube-proxy-h9zbj"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.287003    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9tvd\" (UniqueName: \"kubernetes.io/projected/b8841647-df8d-4a10-bbbe-96e25fa96a6a-kube-api-access-b9tvd\") pod \"kube-proxy-h9zbj\" (UID: \"b8841647-df8d-4a10-bbbe-96e25fa96a6a\") " pod="kube-system/kube-proxy-h9zbj"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.287098    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f9eb2add-33ca-4035-9dbb-3505ded226ed-cni-cfg\") pod \"kindnet-vp4s9\" (UID: \"f9eb2add-33ca-4035-9dbb-3505ded226ed\") " pod="kube-system/kindnet-vp4s9"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.287137    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9eb2add-33ca-4035-9dbb-3505ded226ed-lib-modules\") pod \"kindnet-vp4s9\" (UID: \"f9eb2add-33ca-4035-9dbb-3505ded226ed\") " pod="kube-system/kindnet-vp4s9"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395435    1452 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395474    1452 projected.go:196] Error preparing data for projected volume kube-api-access-hwwjb for pod kube-system/kindnet-vp4s9: configmap "kube-root-ca.crt" not found
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395482    1452 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395505    1452 projected.go:196] Error preparing data for projected volume kube-api-access-b9tvd for pod kube-system/kube-proxy-h9zbj: configmap "kube-root-ca.crt" not found
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395566    1452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9eb2add-33ca-4035-9dbb-3505ded226ed-kube-api-access-hwwjb podName:f9eb2add-33ca-4035-9dbb-3505ded226ed nodeName:}" failed. No retries permitted until 2025-11-23 08:45:41.895537206 +0000 UTC m=+5.930421263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hwwjb" (UniqueName: "kubernetes.io/projected/f9eb2add-33ca-4035-9dbb-3505ded226ed-kube-api-access-hwwjb") pod "kindnet-vp4s9" (UID: "f9eb2add-33ca-4035-9dbb-3505ded226ed") : configmap "kube-root-ca.crt" not found
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395611    1452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b8841647-df8d-4a10-bbbe-96e25fa96a6a-kube-api-access-b9tvd podName:b8841647-df8d-4a10-bbbe-96e25fa96a6a nodeName:}" failed. No retries permitted until 2025-11-23 08:45:41.895593713 +0000 UTC m=+5.930477752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b9tvd" (UniqueName: "kubernetes.io/projected/b8841647-df8d-4a10-bbbe-96e25fa96a6a-kube-api-access-b9tvd") pod "kube-proxy-h9zbj" (UID: "b8841647-df8d-4a10-bbbe-96e25fa96a6a") : configmap "kube-root-ca.crt" not found
	Nov 23 08:45:43 embed-certs-319770 kubelet[1452]: I1123 08:45:43.134564    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vp4s9" podStartSLOduration=2.13453459 podStartE2EDuration="2.13453459s" podCreationTimestamp="2025-11-23 08:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:43.123693663 +0000 UTC m=+7.158577722" watchObservedRunningTime="2025-11-23 08:45:43.13453459 +0000 UTC m=+7.169418649"
	Nov 23 08:45:43 embed-certs-319770 kubelet[1452]: I1123 08:45:43.147207    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h9zbj" podStartSLOduration=2.147180715 podStartE2EDuration="2.147180715s" podCreationTimestamp="2025-11-23 08:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:43.147104931 +0000 UTC m=+7.181988990" watchObservedRunningTime="2025-11-23 08:45:43.147180715 +0000 UTC m=+7.182064779"
	Nov 23 08:45:53 embed-certs-319770 kubelet[1452]: I1123 08:45:53.228558    1452 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:45:53 embed-certs-319770 kubelet[1452]: I1123 08:45:53.375869    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87670385-ded2-45ae-961d-aa678c11ba46-config-volume\") pod \"coredns-66bc5c9577-7h498\" (UID: \"87670385-ded2-45ae-961d-aa678c11ba46\") " pod="kube-system/coredns-66bc5c9577-7h498"
	Nov 23 08:45:53 embed-certs-319770 kubelet[1452]: I1123 08:45:53.375911    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dl7\" (UniqueName: \"kubernetes.io/projected/ca0a7875-3a86-4485-b78e-497440bd0ce4-kube-api-access-n9dl7\") pod \"storage-provisioner\" (UID: \"ca0a7875-3a86-4485-b78e-497440bd0ce4\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:53 embed-certs-319770 kubelet[1452]: I1123 08:45:53.375931    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726ln\" (UniqueName: \"kubernetes.io/projected/87670385-ded2-45ae-961d-aa678c11ba46-kube-api-access-726ln\") pod \"coredns-66bc5c9577-7h498\" (UID: \"87670385-ded2-45ae-961d-aa678c11ba46\") " pod="kube-system/coredns-66bc5c9577-7h498"
	Nov 23 08:45:53 embed-certs-319770 kubelet[1452]: I1123 08:45:53.375944    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ca0a7875-3a86-4485-b78e-497440bd0ce4-tmp\") pod \"storage-provisioner\" (UID: \"ca0a7875-3a86-4485-b78e-497440bd0ce4\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:54 embed-certs-319770 kubelet[1452]: I1123 08:45:54.156165    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7h498" podStartSLOduration=13.156142638 podStartE2EDuration="13.156142638s" podCreationTimestamp="2025-11-23 08:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:54.156034667 +0000 UTC m=+18.190918725" watchObservedRunningTime="2025-11-23 08:45:54.156142638 +0000 UTC m=+18.191026703"
	Nov 23 08:45:57 embed-certs-319770 kubelet[1452]: I1123 08:45:57.021658    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.021613358 podStartE2EDuration="15.021613358s" podCreationTimestamp="2025-11-23 08:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:54.187165584 +0000 UTC m=+18.222049643" watchObservedRunningTime="2025-11-23 08:45:57.021613358 +0000 UTC m=+21.056497419"
	Nov 23 08:45:57 embed-certs-319770 kubelet[1452]: I1123 08:45:57.200194    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4cnm\" (UniqueName: \"kubernetes.io/projected/e1165604-fe4b-4b63-a3e2-5378a2836868-kube-api-access-h4cnm\") pod \"busybox\" (UID: \"e1165604-fe4b-4b63-a3e2-5378a2836868\") " pod="default/busybox"
	
	
	==> storage-provisioner [01edc02abad606a19c85fea8936232faabe985b45747a6aafb50e2f775b8c9c5] <==
	I1123 08:45:53.806776       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:45:53.820018       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:45:53.820590       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:53.824588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.833221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:53.833429       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:53.833580       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4cc2d284-b966-4474-bbd0-ff4c859e315e", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-319770_889d050f-5a76-4842-991f-3fbede1c7961 became leader
	I1123 08:45:53.833620       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-319770_889d050f-5a76-4842-991f-3fbede1c7961!
	W1123 08:45:53.841456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.851884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:53.934095       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-319770_889d050f-5a76-4842-991f-3fbede1c7961!
	W1123 08:45:55.915603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:56.040806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:58.045719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:58.054936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:00.058941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:00.064357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:02.068408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:02.073245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:04.077476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:04.081503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:06.084999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:06.089933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:08.096798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:08.102625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-319770 -n embed-certs-319770
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-319770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-319770
helpers_test.go:243: (dbg) docker inspect embed-certs-319770:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68",
	        "Created": "2025-11-23T08:45:16.059734305Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:45:16.110806257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68/hostname",
	        "HostsPath": "/var/lib/docker/containers/dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68/hosts",
	        "LogPath": "/var/lib/docker/containers/dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68/dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68-json.log",
	        "Name": "/embed-certs-319770",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-319770:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-319770",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dbe19d3629a912a1e1b33eb3c619d1ae0e29726e3f5d743a66def00eab7afe68",
	                "LowerDir": "/var/lib/docker/overlay2/0f787ff3b62a869d8cae9841b2bb9054d9f115324aace42731b35b65551dc576-init/diff:/var/lib/docker/overlay2/ee04ca8b85d0dedeb02bd9a5189a59a7f53ca89a011d262a78df32fa43bf0598/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f787ff3b62a869d8cae9841b2bb9054d9f115324aace42731b35b65551dc576/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f787ff3b62a869d8cae9841b2bb9054d9f115324aace42731b35b65551dc576/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f787ff3b62a869d8cae9841b2bb9054d9f115324aace42731b35b65551dc576/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-319770",
	                "Source": "/var/lib/docker/volumes/embed-certs-319770/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-319770",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-319770",
	                "name.minikube.sigs.k8s.io": "embed-certs-319770",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6520d3581e59e1fb727b37d1efa4a5f233b63ab2d98f871884b24b7080b39293",
	            "SandboxKey": "/var/run/docker/netns/6520d3581e59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-319770": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "547544c725d7f45f6b42e2321527d80cabd824b4ef4a7493d17401e268681439",
	                    "EndpointID": "621dee2d113407623ca2acd7f8ef9d8206359ab810192fcfcc9b9dfb7ed05d51",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "9e:4b:59:9a:42:c4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-319770",
	                        "dbe19d3629a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-319770 -n embed-certs-319770
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-319770 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-319770 logs -n 25: (1.502930685s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ start   │ -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p cert-expiration-680868 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-680868       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p kubernetes-upgrade-776670                                                                                                                                                                                                                        │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-319770 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-319770           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p cert-expiration-680868                                                                                                                                                                                                                           │ cert-expiration-680868       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-445958                                                                                                                                                                                                                     │ disable-driver-mounts-445958 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p default-k8s-diff-port-525009 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-525009 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-204346 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-204346 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ unpause │ -p old-k8s-version-204346 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-204346                                                                                                                                                                                                                           │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-204346                                                                                                                                                                                                                           │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ no-preload-999106 image list --format=json                                                                                                                                                                                                          │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-999106 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ unpause │ -p no-preload-999106 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p no-preload-999106                                                                                                                                                                                                                                │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p no-preload-999106                                                                                                                                                                                                                                │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p auto-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-794429                  │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-399335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ stop    │ -p newest-cni-399335 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ addons  │ enable dashboard -p newest-cni-399335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:46:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:46:02.262862  297115 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:46:02.263457  297115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:02.263472  297115 out.go:374] Setting ErrFile to fd 2...
	I1123 08:46:02.263479  297115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:02.263959  297115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:46:02.265014  297115 out.go:368] Setting JSON to false
	I1123 08:46:02.266198  297115 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5303,"bootTime":1763882259,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:46:02.266288  297115 start.go:143] virtualization: kvm guest
	I1123 08:46:02.268238  297115 out.go:179] * [newest-cni-399335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:46:02.270020  297115 notify.go:221] Checking for updates...
	I1123 08:46:02.270024  297115 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:46:02.271482  297115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:46:02.272843  297115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:46:02.274014  297115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:46:02.275227  297115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:46:02.276361  297115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:46:02.278076  297115 config.go:182] Loaded profile config "newest-cni-399335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:02.278849  297115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:46:02.305981  297115 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:46:02.306077  297115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:02.369456  297115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:46:02.357744797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:02.369605  297115 docker.go:319] overlay module found
	I1123 08:46:02.371588  297115 out.go:179] * Using the docker driver based on existing profile
	I1123 08:46:02.372889  297115 start.go:309] selected driver: docker
	I1123 08:46:02.372908  297115 start.go:927] validating driver "docker" against &{Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:02.373024  297115 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:46:02.373690  297115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:02.434152  297115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:46:02.423470428 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:02.434445  297115 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:46:02.434482  297115 cni.go:84] Creating CNI manager for ""
	I1123 08:46:02.434550  297115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:46:02.434584  297115 start.go:353] cluster config:
	{Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:02.437216  297115 out.go:179] * Starting "newest-cni-399335" primary control-plane node in "newest-cni-399335" cluster
	I1123 08:46:02.438363  297115 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:46:02.439542  297115 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:46:02.440662  297115 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:46:02.440696  297115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:46:02.440705  297115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 08:46:02.440721  297115 cache.go:65] Caching tarball of preloaded images
	I1123 08:46:02.440861  297115 preload.go:238] Found /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:46:02.440884  297115 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:46:02.440996  297115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/config.json ...
	I1123 08:46:02.462167  297115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:46:02.462192  297115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:46:02.462213  297115 cache.go:243] Successfully downloaded all kic artifacts
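The two cache hits above (preload tarball found on disk, kic base image already in the daemon) reduce to an os.Stat plus a `docker image inspect`. A rough sketch, with the tarball name and image ref taken from the log but a generic $HOME path instead of the Jenkins workspace:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
    		"preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4")
    	if _, err := os.Stat(tarball); err == nil {
    		fmt.Println("preload tarball cached, skipping download")
    	}
    	// Zero exit status means the base image is already in the local daemon.
    	img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948"
    	if err := exec.Command("docker", "image", "inspect", img).Run(); err == nil {
    		fmt.Println("base image present, skipping pull")
    	}
    }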
	I1123 08:46:02.462248  297115 start.go:360] acquireMachinesLock for newest-cni-399335: {Name:mka68fc1b11056460ac5dd4946687e6696340967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:46:02.462317  297115 start.go:364] duration metric: took 44.173µs to acquireMachinesLock for "newest-cni-399335"
	I1123 08:46:02.462339  297115 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:46:02.462349  297115 fix.go:54] fixHost starting: 
	I1123 08:46:02.462592  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:02.480611  297115 fix.go:112] recreateIfNeeded on newest-cni-399335: state=Stopped err=<nil>
	W1123 08:46:02.480640  297115 fix.go:138] unexpected machine state, will restart: <nil>
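fixHost decides between reuse, restart, and recreate from the single `docker container inspect` template run at 08:46:02.462592. Roughly, in Go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "container", "inspect",
    		"newest-cni-399335", "--format", "{{.State.Status}}").Output()
    	if err != nil {
    		panic(err) // no such container: a fresh create would be needed
    	}
    	switch state := strings.TrimSpace(string(out)); state {
    	case "running":
    		fmt.Println("reuse machine as-is")
    	case "exited", "created":
    		fmt.Println("restart existing container") // the path taken in this run
    	default:
    		fmt.Println("unexpected state:", state)
    	}
    }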
	I1123 08:46:02.037790  293483 out.go:252]   - Generating certificates and keys ...
	I1123 08:46:02.037896  293483 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:46:02.037981  293483 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:46:02.456059  293483 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:46:02.650760  293483 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:46:02.892889  293483 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:46:03.433697  293483 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:46:03.596148  293483 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:46:03.596284  293483 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-794429 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:46:03.904760  293483 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:46:03.904904  293483 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-794429 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:46:04.138573  293483 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:46:04.371416  293483 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:46:04.533631  293483 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:46:04.533727  293483 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:46:05.059932  293483 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:46:05.296891  293483 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:46:05.532157  293483 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:46:05.911922  293483 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:46:06.189126  293483 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:46:06.190020  293483 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:46:06.206499  293483 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:46:06.209148  293483 out.go:252]   - Booting up control plane ...
	I1123 08:46:06.209257  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:46:06.209349  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:46:06.209433  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:46:06.223747  293483 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:46:06.223880  293483 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:46:06.230267  293483 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:46:06.230625  293483 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:46:06.230707  293483 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:46:06.333353  293483 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:46:06.333489  293483 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
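The kubelet-check step above is a plain HTTP poll of the kubelet's healthz endpoint with a 4m deadline. A self-contained sketch of that wait loop (endpoint and timeout from the log; the one-second retry interval is an assumption):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get("http://127.0.0.1:10248/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("kubelet healthy:", string(body))
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("kubelet did not become healthy within 4m0s")
    }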
	I1123 08:46:02.482405  297115 out.go:252] * Restarting existing docker container for "newest-cni-399335" ...
	I1123 08:46:02.482477  297115 cli_runner.go:164] Run: docker start newest-cni-399335
	I1123 08:46:02.785631  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:02.807142  297115 kic.go:430] container "newest-cni-399335" state is running.
	I1123 08:46:02.807612  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:02.827013  297115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/config.json ...
	I1123 08:46:02.827313  297115 machine.go:94] provisionDockerMachine start ...
	I1123 08:46:02.827393  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:02.848474  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:02.848851  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:02.848869  297115 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:46:02.849609  297115 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48164->127.0.0.1:33098: read: connection reset by peer
	I1123 08:46:05.993595  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-399335
	
	I1123 08:46:05.993630  297115 ubuntu.go:182] provisioning hostname "newest-cni-399335"
	I1123 08:46:05.993706  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.012745  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:06.012960  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:06.012974  297115 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-399335 && echo "newest-cni-399335" | sudo tee /etc/hostname
	I1123 08:46:06.167781  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-399335
	
	I1123 08:46:06.167881  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.188339  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:06.188686  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:06.188719  297115 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-399335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-399335/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-399335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:46:06.342749  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
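The SSH command above is the provisioner's idempotent hostname fix-up: if no line in /etc/hosts already ends with the machine name, rewrite the 127.0.1.1 entry or append one. The same grep-v/append pattern recurs later for host.minikube.internal (08:46:08.464173) and control-plane.minikube.internal (08:46:08.689721). The logic in Go, operating on a local copy of the file:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    func main() {
    	const host = "newest-cni-399335"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	text := string(data)
    	// grep -xq '.*\snewest-cni-399335': some line already names the host.
    	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(host)+`$`).MatchString(text) {
    		return // nothing to do
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(text) {
    		text = loopback.ReplaceAllString(text, "127.0.1.1 "+host)
    	} else {
    		text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + host + "\n"
    	}
    	fmt.Print(text) // the SSH variant writes this back via sudo tee
    }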
	I1123 08:46:06.342777  297115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-13876/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-13876/.minikube}
	I1123 08:46:06.342822  297115 ubuntu.go:190] setting up certificates
	I1123 08:46:06.342839  297115 provision.go:84] configureAuth start
	I1123 08:46:06.342903  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:06.364340  297115 provision.go:143] copyHostCerts
	I1123 08:46:06.364416  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem, removing ...
	I1123 08:46:06.364431  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem
	I1123 08:46:06.364526  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem (1078 bytes)
	I1123 08:46:06.364669  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem, removing ...
	I1123 08:46:06.364683  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem
	I1123 08:46:06.364724  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem (1123 bytes)
	I1123 08:46:06.364792  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem, removing ...
	I1123 08:46:06.364799  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem
	I1123 08:46:06.364823  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem (1675 bytes)
	I1123 08:46:06.364877  297115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem org=jenkins.newest-cni-399335 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-399335]
	I1123 08:46:06.479812  297115 provision.go:177] copyRemoteCerts
	I1123 08:46:06.479870  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:46:06.479911  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.500499  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.603344  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:46:06.621631  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:46:06.640892  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:46:06.659451  297115 provision.go:87] duration metric: took 316.596054ms to configureAuth
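configureAuth's server cert is an x509 leaf whose SANs are exactly the list logged at 08:46:06.364877: loopback, node IP, localhost, minikube, and the machine hostname. A compressed sketch with Go's crypto/x509; for brevity it self-signs, whereas minikube signs with the machine CA from ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-399335"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-399335"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }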
	I1123 08:46:06.659481  297115 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:46:06.659806  297115 config.go:182] Loaded profile config "newest-cni-399335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:06.659823  297115 machine.go:97] duration metric: took 3.832490175s to provisionDockerMachine
	I1123 08:46:06.659835  297115 start.go:293] postStartSetup for "newest-cni-399335" (driver="docker")
	I1123 08:46:06.659849  297115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:46:06.659904  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:46:06.659946  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.678221  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.780370  297115 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:46:06.783936  297115 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:46:06.783965  297115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:46:06.783976  297115 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/addons for local assets ...
	I1123 08:46:06.784034  297115 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/files for local assets ...
	I1123 08:46:06.784128  297115 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem -> 174422.pem in /etc/ssl/certs
	I1123 08:46:06.784237  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:46:06.791552  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem --> /etc/ssl/certs/174422.pem (1708 bytes)
	I1123 08:46:06.809068  297115 start.go:296] duration metric: took 149.216822ms for postStartSetup
	I1123 08:46:06.809157  297115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:46:06.809195  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.829536  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.933880  297115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:46:06.938359  297115 fix.go:56] duration metric: took 4.476004793s for fixHost
	I1123 08:46:06.938381  297115 start.go:83] releasing machines lock for "newest-cni-399335", held for 4.476053793s
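The two df probes above are minikube's disk-pressure check on /var; their awk extraction translates directly. A sketch (column positions match coreutils df output, so the index is an assumption about that layout):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("df", "-BG", "/var").Output()
    	if err != nil {
    		panic(err)
    	}
    	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
    	fields := strings.Fields(lines[1])          // NR==2: the data row
    	fmt.Println("available on /var:", fields[3]) // $4 = Avail
    }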
	I1123 08:46:06.938445  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:06.957272  297115 ssh_runner.go:195] Run: cat /version.json
	I1123 08:46:06.957329  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.957376  297115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:46:06.957477  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.979733  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.981876  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:07.156878  297115 ssh_runner.go:195] Run: systemctl --version
	I1123 08:46:07.164235  297115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:46:07.169524  297115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:46:07.169588  297115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:46:07.180131  297115 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:46:07.180160  297115 start.go:496] detecting cgroup driver to use...
	I1123 08:46:07.180197  297115 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:46:07.180249  297115 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:46:07.202860  297115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:46:07.219930  297115 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:46:07.219994  297115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:46:07.238447  297115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:46:07.254293  297115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:46:07.365439  297115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:46:07.535083  297115 docker.go:234] disabling docker service ...
	I1123 08:46:07.535146  297115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:46:07.559983  297115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:46:07.579841  297115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:46:07.725342  297115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:46:07.874595  297115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:46:07.893359  297115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:46:07.909897  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:46:07.920311  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:46:07.929226  297115 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 08:46:07.929301  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 08:46:07.938295  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:46:07.947245  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:46:07.956838  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:46:07.968734  297115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:46:07.979030  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:46:07.991079  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:46:08.003858  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
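The sed pipeline above patches /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = true (the systemd cgroup driver was detected on the host at 08:46:07.180197), migrate v1 runtime names to io.containerd.runc.v2, and re-enable unprivileged ports. The SystemdCgroup edit, as a Go regex rewrite over a local copy of the file:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	path := "config.toml" // a local copy of /etc/containerd/config.toml
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }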
	I1123 08:46:08.015755  297115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:46:08.025531  297115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:46:08.038175  297115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:08.166785  297115 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:46:08.324792  297115 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:46:08.324876  297115 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:46:08.331799  297115 start.go:564] Will wait 60s for crictl version
	I1123 08:46:08.331870  297115 ssh_runner.go:195] Run: which crictl
	I1123 08:46:08.336854  297115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:46:08.373035  297115 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
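Between restarting containerd and querying crictl, minikube waits up to 60s for the runtime socket to appear. That wait reduces to polling stat on the socket path; a sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/run/containerd/containerd.sock"
    	deadline := time.Now().Add(60 * time.Second)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			fmt.Println("socket ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for", sock)
    }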
	I1123 08:46:08.373100  297115 ssh_runner.go:195] Run: containerd --version
	I1123 08:46:08.401111  297115 ssh_runner.go:195] Run: containerd --version
	I1123 08:46:08.430798  297115 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:46:08.431908  297115 cli_runner.go:164] Run: docker network inspect newest-cni-399335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:46:08.457541  297115 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 08:46:08.464173  297115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:46:08.482189  297115 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 08:46:08.483606  297115 kubeadm.go:884] updating cluster {Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:46:08.483802  297115 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:46:08.483881  297115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:46:08.526415  297115 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:46:08.526443  297115 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:46:08.526514  297115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:46:08.563009  297115 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:46:08.563033  297115 cache_images.go:86] Images are preloaded, skipping loading
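The preload verification runs `crictl images --output json` twice and checks that every expected image is already present. Decoding that output in Go might look like the sketch below; the images/repoTags field names reflect the CRI tooling's JSON as I recall it, so treat them as an assumption:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	for _, img := range list.Images {
    		fmt.Println(img.RepoTags)
    	}
    }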
	I1123 08:46:08.563042  297115 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1123 08:46:08.563169  297115 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-399335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:46:08.563225  297115 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:46:08.602145  297115 cni.go:84] Creating CNI manager for ""
	I1123 08:46:08.602169  297115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:46:08.602186  297115 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 08:46:08.602215  297115 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-399335 NodeName:newest-cni-399335 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:46:08.602376  297115 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-399335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
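	The generated kubeadm.yaml above carries four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a sanity check, the KubeletConfiguration document can be decoded with gopkg.in/yaml.v3; a sketch against a subset of the fields shown above:

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    type kubeletConfig struct {
    	CgroupDriver string            `yaml:"cgroupDriver"`
    	FailSwapOn   bool              `yaml:"failSwapOn"`
    	EvictionHard map[string]string `yaml:"evictionHard"`
    }

    const doc = `
    cgroupDriver: systemd
    failSwapOn: false
    evictionHard:
      nodefs.available: "0%"
      imagefs.available: "0%"
    `

    func main() {
    	var kc kubeletConfig
    	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%+v\n", kc)
    }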
	
	I1123 08:46:08.602455  297115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:46:08.612890  297115 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:46:08.612967  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:46:08.627813  297115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1123 08:46:08.647557  297115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:46:08.665131  297115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1123 08:46:08.685078  297115 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:46:08.689721  297115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:46:08.703623  297115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:08.833755  297115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:46:08.860209  297115 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335 for IP: 192.168.103.2
	I1123 08:46:08.860232  297115 certs.go:195] generating shared ca certs ...
	I1123 08:46:08.860280  297115 certs.go:227] acquiring lock for ca certs: {Name:mk376e2c25eb30d8b09b93cb4624441e819bcc8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:08.860530  297115 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.key
	I1123 08:46:08.860612  297115 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.key
	I1123 08:46:08.860628  297115 certs.go:257] generating profile certs ...
	I1123 08:46:08.860770  297115 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/client.key
	I1123 08:46:08.860850  297115 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/apiserver.key.87937944
	I1123 08:46:08.860905  297115 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/proxy-client.key
	I1123 08:46:08.861044  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442.pem (1338 bytes)
	W1123 08:46:08.861086  297115 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442_empty.pem, impossibly tiny 0 bytes
	I1123 08:46:08.861100  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:46:08.861136  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:46:08.861175  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:46:08.861210  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem (1675 bytes)
	I1123 08:46:08.861268  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem (1708 bytes)
	I1123 08:46:08.862249  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:46:08.890883  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:46:08.919744  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:46:08.946210  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:46:08.982294  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:46:09.019550  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1123 08:46:09.059602  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:46:09.086103  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:46:09.114201  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442.pem --> /usr/share/ca-certificates/17442.pem (1338 bytes)
	I1123 08:46:09.144268  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem --> /usr/share/ca-certificates/174422.pem (1708 bytes)
	I1123 08:46:09.180572  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:46:09.201581  297115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:46:09.215821  297115 ssh_runner.go:195] Run: openssl version
	I1123 08:46:09.223018  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17442.pem && ln -fs /usr/share/ca-certificates/17442.pem /etc/ssl/certs/17442.pem"
	I1123 08:46:09.232209  297115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17442.pem
	I1123 08:46:09.236284  297115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:16 /usr/share/ca-certificates/17442.pem
	I1123 08:46:09.236355  297115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17442.pem
	I1123 08:46:09.272176  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17442.pem /etc/ssl/certs/51391683.0"
	I1123 08:46:09.280935  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/174422.pem && ln -fs /usr/share/ca-certificates/174422.pem /etc/ssl/certs/174422.pem"
	I1123 08:46:09.290437  297115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/174422.pem
	I1123 08:46:09.294928  297115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:16 /usr/share/ca-certificates/174422.pem
	I1123 08:46:09.294987  297115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/174422.pem
	I1123 08:46:09.354146  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/174422.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:46:09.367905  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:46:09.382225  297115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:09.388164  297115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:09.388250  297115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:09.430194  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
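The openssl/ln sequence above builds OpenSSL's hashed-name lookup directory: each CA in /etc/ssl/certs needs a symlink named <subject-hash>.0 (here 51391683.0, 3ec20f2e.0, b5213941.0) so TLS clients can find it by subject. Scripted in Go (`-hash` is openssl's alias for the subject hash):

    package main

    import (
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	if err := exec.Command("sudo", "ln", "-fs", cert, link).Run(); err != nil {
    		panic(err)
    	}
    }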
	I1123 08:46:09.442291  297115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:46:09.449422  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:46:09.526763  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:46:09.593988  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:46:09.703010  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:46:09.789583  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:46:09.853622  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
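Each `-checkend 86400` call above asks whether a control-plane cert expires within the next 24 hours (openssl exits non-zero if so). The same check in pure Go, over any of the listed certs:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 86400s; regeneration needed")
    	} else {
    		fmt.Println("certificate valid past the check window")
    	}
    }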
	I1123 08:46:09.922029  297115 kubeadm.go:401] StartCluster: {Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:09.922157  297115 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:46:09.922350  297115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:46:09.996366  297115 cri.go:89] found id: "9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1"
	I1123 08:46:09.996395  297115 cri.go:89] found id: "a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93"
	I1123 08:46:09.996401  297115 cri.go:89] found id: "5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865"
	I1123 08:46:09.996405  297115 cri.go:89] found id: "4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d"
	I1123 08:46:09.996408  297115 cri.go:89] found id: "807b76092a2f3826eb0b1f4ffd905f1558564151bbffb289a091369213ac3d66"
	I1123 08:46:09.996413  297115 cri.go:89] found id: "dd4ff42a202e4cece7872b48e65bf636b9f42a17ea01250502b439814c1772f1"
	I1123 08:46:09.996417  297115 cri.go:89] found id: "b8af8e149bfd1a9f0874f56d7c7812838cab58bce566ae2598bf5e99fb470db7"
	I1123 08:46:09.996421  297115 cri.go:89] found id: "b6d21ff2e246be4d70b8875b3b234adeb3b995e2334aab2dfee053c19daa6839"
	I1123 08:46:09.996425  297115 cri.go:89] found id: "bd5d93c8e80e3ae592e10a66d3b65225e8e2900e70d2c4efc9b0e215a576cd66"
	I1123 08:46:09.996434  297115 cri.go:89] found id: "9c2fa9f9f2c324430e4f3e6743e98eeea5c0938f06bb77f15b26511fabdc4fa0"
	I1123 08:46:09.996443  297115 cri.go:89] found id: ""
	I1123 08:46:09.996496  297115 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 08:46:10.044196  297115 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2/rootfs","created":"2025-11-23T08:46:09.614999348Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-399335_e75b5303f5682b75c76eb79dcc14c2e7","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e75b5303f5682b75c76eb79dcc14c2e7"},"owner":"root"},{"ociVersion":"1.2.1","id":"4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d","pid":914,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d/rootfs","created":"2025-11-23T08:46:09.790802621Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"64ff81d56135c1526673ad753b396633"},"owner":"root"},{"ociVersion":"1.2.1","id":"5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865","pid":959,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865/rootfs","created":"2025-11-23T08:46:09.843865397Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"265fd1decd3ec114f8f520dd098e0a26"},"owner":"root"},{"ociVersion":"1.2.1","id":"8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc","pid":849,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc/rootfs","created":"2025-11-23T08:46:09.62777907Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-399335_e7df3d71c3239606fee540d5b72221e3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e7df3d71c3239606fee540d5b72221e3"},"owner":"root"},{"ociVersion":"1.2.1","id":"9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1","pid":973,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1/rootfs","created":"2025-11-23T08:46:09.852274123Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e7df3d71c3239606fee540d5b72221e3"},"owner":"root"},{"ociVersion":"1.2.1","id":"a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93","pid":974,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93/rootfs","created":"2025-11-23T08:46:09.859475259Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e75b5303f5682b75c76eb79dcc14c2e7"},"owner":"root"},{"ociVersion":"1.2.1","id":"b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740","pid":864,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740/rootfs","created":"2025-11-23T08:46:09.627673902Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-399335_265fd1decd3ec114f8f520dd098e0a26","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"265fd1decd3ec114f8f520dd098e0a26"},"owner":"root"},{"ociVersion":"1.2.1","id":"e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da","pid":801,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da/rootfs","created":"2025-11-23T08:46:09.578352959Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-399335_64ff81d56135c1526673ad753b396633","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"64ff81d56135c1526673ad753b396633"},"owner":"root"}]
	I1123 08:46:10.044520  297115 cri.go:126] list returned 8 containers
	I1123 08:46:10.044625  297115 cri.go:129] container: {ID:06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2 Status:running}
	I1123 08:46:10.044665  297115 cri.go:131] skipping 06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2 - not in ps
	I1123 08:46:10.044672  297115 cri.go:129] container: {ID:4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d Status:running}
	I1123 08:46:10.044681  297115 cri.go:135] skipping {4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d running}: state = "running", want "paused"
	I1123 08:46:10.044691  297115 cri.go:129] container: {ID:5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865 Status:running}
	I1123 08:46:10.044698  297115 cri.go:135] skipping {5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865 running}: state = "running", want "paused"
	I1123 08:46:10.044704  297115 cri.go:129] container: {ID:8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc Status:running}
	I1123 08:46:10.044712  297115 cri.go:131] skipping 8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc - not in ps
	I1123 08:46:10.044718  297115 cri.go:129] container: {ID:9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1 Status:running}
	I1123 08:46:10.044727  297115 cri.go:135] skipping {9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1 running}: state = "running", want "paused"
	I1123 08:46:10.044734  297115 cri.go:129] container: {ID:a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93 Status:running}
	I1123 08:46:10.044742  297115 cri.go:135] skipping {a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93 running}: state = "running", want "paused"
	I1123 08:46:10.044748  297115 cri.go:129] container: {ID:b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740 Status:running}
	I1123 08:46:10.044755  297115 cri.go:131] skipping b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740 - not in ps
	I1123 08:46:10.044760  297115 cri.go:129] container: {ID:e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da Status:running}
	I1123 08:46:10.044765  297115 cri.go:131] skipping e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da - not in ps
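The cri.go pass above decodes `runc --root /run/containerd/runc/k8s.io list -f json` and then filters: pod sandboxes are skipped ("not in ps"), and because the lister wants State:paused, every running task is skipped too, leaving nothing to act on. The shape of that filter, using the fields visible in the JSON dump:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type runcContainer struct {
    	ID          string            `json:"id"`
    	Status      string            `json:"status"`
    	Annotations map[string]string `json:"annotations"`
    }

    func main() {
    	out, err := exec.Command("sudo", "runc",
    		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var all []runcContainer
    	if err := json.Unmarshal(out, &all); err != nil {
    		panic(err)
    	}
    	const want = "paused"
    	for _, c := range all {
    		if c.Annotations["io.kubernetes.cri.container-type"] == "sandbox" {
    			continue // "not in ps": pod sandboxes, not workload containers
    		}
    		if c.Status != want {
    			continue // state = "running", want "paused"
    		}
    		fmt.Println("would act on", c.ID)
    	}
    }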
	I1123 08:46:10.044825  297115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:46:10.066819  297115 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:46:10.066887  297115 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:46:10.067303  297115 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:46:10.081936  297115 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:46:10.083561  297115 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-399335" does not appear in /home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:46:10.084688  297115 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-13876/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-399335" cluster setting kubeconfig missing "newest-cni-399335" context setting]
	I1123 08:46:10.086208  297115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/kubeconfig: {Name:mk636046b7146fd65b5638a6d549b76e61f7f055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:10.088485  297115 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:46:10.097957  297115 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1123 08:46:10.098060  297115 kubeadm.go:602] duration metric: took 31.165606ms to restartPrimaryControlPlane
	I1123 08:46:10.098071  297115 kubeadm.go:403] duration metric: took 176.052287ms to StartCluster
	I1123 08:46:10.098089  297115 settings.go:142] acquiring lock: {Name:mk2c00a8b461754a49d5c7fd5af34c7d1005153a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:10.098161  297115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:46:10.100850  297115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/kubeconfig: {Name:mk636046b7146fd65b5638a6d549b76e61f7f055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:10.101198  297115 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:46:10.101394  297115 config.go:182] Loaded profile config "newest-cni-399335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:10.101452  297115 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:46:10.101529  297115 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-399335"
	I1123 08:46:10.101545  297115 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-399335"
	W1123 08:46:10.101551  297115 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:46:10.101577  297115 host.go:66] Checking if "newest-cni-399335" exists ...
	I1123 08:46:10.102036  297115 addons.go:70] Setting default-storageclass=true in profile "newest-cni-399335"
	I1123 08:46:10.102059  297115 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-399335"
	I1123 08:46:10.102076  297115 addons.go:70] Setting dashboard=true in profile "newest-cni-399335"
	I1123 08:46:10.102101  297115 addons.go:239] Setting addon dashboard=true in "newest-cni-399335"
	W1123 08:46:10.102110  297115 addons.go:248] addon dashboard should already be in state true
	I1123 08:46:10.102151  297115 host.go:66] Checking if "newest-cni-399335" exists ...
	I1123 08:46:10.102354  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:10.102614  297115 addons.go:70] Setting metrics-server=true in profile "newest-cni-399335"
	I1123 08:46:10.102637  297115 addons.go:239] Setting addon metrics-server=true in "newest-cni-399335"
	W1123 08:46:10.102909  297115 addons.go:248] addon metrics-server should already be in state true
	I1123 08:46:10.102962  297115 host.go:66] Checking if "newest-cni-399335" exists ...
	I1123 08:46:10.102991  297115 out.go:179] * Verifying Kubernetes components...
	I1123 08:46:10.103252  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:10.103434  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:10.104177  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:10.104379  297115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:10.140855  297115 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:46:10.143699  297115 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:46:10.147683  297115 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:46:10.147712  297115 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:46:10.147784  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:10.155596  297115 addons.go:239] Setting addon default-storageclass=true in "newest-cni-399335"
	W1123 08:46:10.155626  297115 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:46:10.155669  297115 host.go:66] Checking if "newest-cni-399335" exists ...
	I1123 08:46:10.156196  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:10.163166  297115 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 08:46:10.164251  297115 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
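
The cri.go lines near the top of this log show how minikube decides which containers to act on: it lists everything the runtime reports, drops IDs that are absent from the runtime's ps output, and drops containers whose state does not match the requested one (here it wants "paused", so every running container is skipped). A minimal stand-alone sketch of that filter follows, with hypothetical IDs and data structures rather than minikube's actual types:

    package main

    import "fmt"

    type container struct {
        ID     string
        Status string
    }

    // filterByState keeps only containers that appear in the runtime's ps
    // output and are already in the wanted state; everything else is skipped,
    // matching the "not in ps" and `state = "running", want "paused"` lines.
    func filterByState(all []container, inPs map[string]bool, want string) []string {
        var keep []string
        for _, c := range all {
            switch {
            case !inPs[c.ID]:
                fmt.Printf("skipping %s - not in ps\n", c.ID)
            case c.Status != want:
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
            default:
                keep = append(keep, c.ID)
            }
        }
        return keep
    }

    func main() {
        all := []container{
            {ID: "4967495c75cb", Status: "running"},
            {ID: "06d5459b2240", Status: "running"},
        }
        inPs := map[string]bool{"4967495c75cb": true}
        fmt.Println(filterByState(all, inPs, "paused")) // prints [] - nothing is paused
    }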
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	106a6e4800152       56cc512116c8f       10 seconds ago      Running             busybox                   0                   f6fbb8317ccfb       busybox                                      default
	51abe1f942581       52546a367cc9e       17 seconds ago      Running             coredns                   0                   034a1589689fc       coredns-66bc5c9577-7h498                     kube-system
	01edc02abad60       6e38f40d628db       17 seconds ago      Running             storage-provisioner       0                   b24815c1e9982       storage-provisioner                          kube-system
	4454d77969b6b       409467f978b4a       28 seconds ago      Running             kindnet-cni               0                   3786a868a3941       kindnet-vp4s9                                kube-system
	5f7d35ec59fcf       fc25172553d79       28 seconds ago      Running             kube-proxy                0                   0e72adff43b1c       kube-proxy-h9zbj                             kube-system
	4a05f14d7bdd8       c3994bc696102       41 seconds ago      Running             kube-apiserver            0                   93dac5c113aee       kube-apiserver-embed-certs-319770            kube-system
	4bae937f30535       5f1f5298c888d       41 seconds ago      Running             etcd                      0                   8dfea39f8b9da       etcd-embed-certs-319770                      kube-system
	6268b5880694a       7dd6aaa1717ab       41 seconds ago      Running             kube-scheduler            0                   91fc9e3cc49c8       kube-scheduler-embed-certs-319770            kube-system
	30e28d8cdad13       c80c8dbafe7dd       41 seconds ago      Running             kube-controller-manager   0                   c888f01c108cd       kube-controller-manager-embed-certs-319770   kube-system
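
The table above is crictl-style output over the CRI API; ListContainers is the underlying call. A short sketch of the same query in Go, assuming the stock containerd socket path (/run/containerd/containerd.sock):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Stock containerd CRI socket; adjust for a non-default runtime path.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // ListContainers is the CRI call behind a crictl ps -a style table.
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
            ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%.13s  %-25s %s\n", c.Id, c.Metadata.Name, c.State.String())
        }
    }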
	
	
	==> containerd <==
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.705329745Z" level=info msg="Container 51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.706237623Z" level=info msg="CreateContainer within sandbox \"b24815c1e9982562ecd862c1e928eac7072612bcc07ceebb0b9dca5cac1e2555\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"01edc02abad606a19c85fea8936232faabe985b45747a6aafb50e2f775b8c9c5\""
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.708328309Z" level=info msg="StartContainer for \"01edc02abad606a19c85fea8936232faabe985b45747a6aafb50e2f775b8c9c5\""
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.709417987Z" level=info msg="connecting to shim 01edc02abad606a19c85fea8936232faabe985b45747a6aafb50e2f775b8c9c5" address="unix:///run/containerd/s/f2fe3bf97b2579055220009f5355e1c22f8a0d9c15242c65a8294b5aa8e9f1c8" protocol=ttrpc version=3
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.713552682Z" level=info msg="CreateContainer within sandbox \"034a1589689fc8db7530156d8ffc74cd60bf114449aaba91b90f2ae10780b296\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299\""
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.714234759Z" level=info msg="StartContainer for \"51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299\""
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.715275692Z" level=info msg="connecting to shim 51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299" address="unix:///run/containerd/s/d91b7efa740f749a8dde8dce2e284678361e2e8ba3fc223ea487ab4c30d0babf" protocol=ttrpc version=3
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.785486139Z" level=info msg="StartContainer for \"51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299\" returns successfully"
	Nov 23 08:45:53 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:53.796909387Z" level=info msg="StartContainer for \"01edc02abad606a19c85fea8936232faabe985b45747a6aafb50e2f775b8c9c5\" returns successfully"
	Nov 23 08:45:57 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:57.463534875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e1165604-fe4b-4b63-a3e2-5378a2836868,Namespace:default,Attempt:0,}"
	Nov 23 08:45:57 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:57.511715997Z" level=info msg="connecting to shim f6fbb8317ccfb036746b1085a92720f6a25bd45d524149e42ef5dba483c9a70a" address="unix:///run/containerd/s/f2fe4ac49a8c960992f3918e6d5932cb35deb80a2c61d776f55aed16bfef660d" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:45:57 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:57.588382305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e1165604-fe4b-4b63-a3e2-5378a2836868,Namespace:default,Attempt:0,} returns sandbox id \"f6fbb8317ccfb036746b1085a92720f6a25bd45d524149e42ef5dba483c9a70a\""
	Nov 23 08:45:57 embed-certs-319770 containerd[664]: time="2025-11-23T08:45:57.591606421Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.233479333Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.234048382Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.235177264Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.237343144Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.237975503Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.64630045s"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.238023376Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.243391192Z" level=info msg="CreateContainer within sandbox \"f6fbb8317ccfb036746b1085a92720f6a25bd45d524149e42ef5dba483c9a70a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.252349024Z" level=info msg="Container 106a6e48001520688d64f24e7e505034d5fb1df78656f963c3cf59a72b8bfbe0: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.259619539Z" level=info msg="CreateContainer within sandbox \"f6fbb8317ccfb036746b1085a92720f6a25bd45d524149e42ef5dba483c9a70a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"106a6e48001520688d64f24e7e505034d5fb1df78656f963c3cf59a72b8bfbe0\""
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.260471267Z" level=info msg="StartContainer for \"106a6e48001520688d64f24e7e505034d5fb1df78656f963c3cf59a72b8bfbe0\""
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.261448400Z" level=info msg="connecting to shim 106a6e48001520688d64f24e7e505034d5fb1df78656f963c3cf59a72b8bfbe0" address="unix:///run/containerd/s/f2fe4ac49a8c960992f3918e6d5932cb35deb80a2c61d776f55aed16bfef660d" protocol=ttrpc version=3
	Nov 23 08:46:00 embed-certs-319770 containerd[664]: time="2025-11-23T08:46:00.323690233Z" level=info msg="StartContainer for \"106a6e48001520688d64f24e7e505034d5fb1df78656f963c3cf59a72b8bfbe0\" returns successfully"
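
The "connecting to shim ... protocol=ttrpc" lines show containerd talking to per-container shim processes over unix sockets under /run/containerd/s/. The same daemon can be inspected from Go with the containerd client; below is a sketch that lists the CRI-managed containers (they live in the k8s.io namespace), assuming the default socket path:

    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the same socket the shim log lines above refer to.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // CRI-managed containers live in the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        list, err := client.Containers(ctx)
        if err != nil {
            panic(err)
        }
        for _, c := range list {
            info, err := c.Info(ctx)
            if err != nil {
                panic(err)
            }
            fmt.Printf("%.13s  %s\n", c.ID(), info.Image)
        }
    }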
	
	
	==> coredns [51abe1f942581ee57b1c60963033d60d4c7fbd657eeeb135280e661659b3d299] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36834 - 46084 "HINFO IN 8511113068449997383.2530350933837730691. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020758373s
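
The single HINFO query above is CoreDNS's own startup self-check. To probe the same server from the cluster network's point of view, Go's stdlib resolver can be pointed straight at the kube-dns ClusterIP (10.96.0.10, allocated in the kube-apiserver log below); the name looked up here is just an example:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 10.96.0.10 is the kube-dns ClusterIP from the apiserver log below.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs)
    }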
	
	
	==> describe nodes <==
	Name:               embed-certs-319770
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-319770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=embed-certs-319770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_45_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:45:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-319770
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:46:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-319770
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                063f1133-c7d9-4c9a-97e7-c82016e59ce8
	  Boot ID:                    3bab2277-1db4-4284-9fcc-5d1d58e87eb4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-66bc5c9577-7h498                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-embed-certs-319770                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-vp4s9                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-embed-certs-319770             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-embed-certs-319770    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-h9zbj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-embed-certs-319770             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s (x8 over 43s)  kubelet          Node embed-certs-319770 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 43s)  kubelet          Node embed-certs-319770 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x7 over 43s)  kubelet          Node embed-certs-319770 status is now: NodeHasSufficientPID
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35s                kubelet          Node embed-certs-319770 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s                kubelet          Node embed-certs-319770 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s                kubelet          Node embed-certs-319770 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node embed-certs-319770 event: Registered Node embed-certs-319770 in Controller
	  Normal  NodeReady                18s                kubelet          Node embed-certs-319770 status is now: NodeReady
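
The Allocated resources block is simple arithmetic over the pod requests listed above: 850m of CPU requested against 8 allocatable cores truncates to the 10% shown. A client-go sketch that fetches the same allocatable figures for this node, assuming a kubeconfig at the default location:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig at the default location; point this at the
        // minikube-integration kubeconfig when reproducing from the CI host.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-319770", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // 850m requested of 8 allocatable cores is the 10% shown above.
        fmt.Printf("allocatable cpu=%s memory=%s\n",
            node.Status.Allocatable.Cpu(), node.Status.Allocatable.Memory())
    }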
	
	
	==> dmesg <==
	[Nov23 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395963] i8042: Warning: Keylock active
	[  +0.012075] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497035] block sda: the capability attribute has been deprecated.
	[  +0.088048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022581] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.308229] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [4bae937f30535bc38b9817b68491148a8d9341a43922d73fb1b88ee85c0ddd1e] <==
	{"level":"warn","ts":"2025-11-23T08:45:34.334534Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"209.297274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:45:34.334610Z","caller":"traceutil/trace.go:172","msg":"trace[301407842] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:0; response_revision:64; }","duration":"209.387963ms","start":"2025-11-23T08:45:34.125204Z","end":"2025-11-23T08:45:34.334592Z","steps":["trace[301407842] 'agreement among raft nodes before linearized reading'  (duration: 125.198354ms)","trace[301407842] 'range keys from in-memory index tree'  (duration: 84.061391ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:34.334834Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"208.762314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/embed-certs-319770.187a965cb136ec06\" limit:1 ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2025-11-23T08:45:34.334880Z","caller":"traceutil/trace.go:172","msg":"trace[1606856148] range","detail":"{range_begin:/registry/events/default/embed-certs-319770.187a965cb136ec06; range_end:; response_count:1; response_revision:66; }","duration":"208.815497ms","start":"2025-11-23T08:45:34.126053Z","end":"2025-11-23T08:45:34.334868Z","steps":["trace[1606856148] 'agreement among raft nodes before linearized reading'  (duration: 208.676055ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:34.334987Z","caller":"traceutil/trace.go:172","msg":"trace[377839492] transaction","detail":"{read_only:false; response_revision:65; number_of_response:1; }","duration":"231.500525ms","start":"2025-11-23T08:45:34.103471Z","end":"2025-11-23T08:45:34.334971Z","steps":["trace[377839492] 'process raft request'  (duration: 146.853059ms)","trace[377839492] 'compare'  (duration: 84.046912ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:34.335027Z","caller":"traceutil/trace.go:172","msg":"trace[1486139997] transaction","detail":"{read_only:false; response_revision:66; number_of_response:1; }","duration":"229.948829ms","start":"2025-11-23T08:45:34.105066Z","end":"2025-11-23T08:45:34.335015Z","steps":["trace[1486139997] 'process raft request'  (duration: 229.565158ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:34.499013Z","caller":"traceutil/trace.go:172","msg":"trace[1761723829] linearizableReadLoop","detail":"{readStateIndex:75; appliedIndex:75; }","duration":"101.655781ms","start":"2025-11-23T08:45:34.397330Z","end":"2025-11-23T08:45:34.498986Z","steps":["trace[1761723829] 'read index received'  (duration: 101.641841ms)","trace[1761723829] 'applied index is now lower than readState.Index'  (duration: 12.417µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:34.562277Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.925842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:45:34.562354Z","caller":"traceutil/trace.go:172","msg":"trace[1692685865] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:70; }","duration":"165.013903ms","start":"2025-11-23T08:45:34.397320Z","end":"2025-11-23T08:45:34.562334Z","steps":["trace[1692685865] 'agreement among raft nodes before linearized reading'  (duration: 101.753542ms)","trace[1692685865] 'range keys from in-memory index tree'  (duration: 63.117384ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:34.562487Z","caller":"traceutil/trace.go:172","msg":"trace[160631960] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"210.517761ms","start":"2025-11-23T08:45:34.351957Z","end":"2025-11-23T08:45:34.562475Z","steps":["trace[160631960] 'process raft request'  (duration: 210.480127ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:34.562481Z","caller":"traceutil/trace.go:172","msg":"trace[1098312375] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"216.281446ms","start":"2025-11-23T08:45:34.346171Z","end":"2025-11-23T08:45:34.562453Z","steps":["trace[1098312375] 'process raft request'  (duration: 152.869116ms)","trace[1098312375] 'compare'  (duration: 63.163932ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:34.562751Z","caller":"traceutil/trace.go:172","msg":"trace[398310118] transaction","detail":"{read_only:false; response_revision:73; number_of_response:1; }","duration":"211.571953ms","start":"2025-11-23T08:45:34.351164Z","end":"2025-11-23T08:45:34.562735Z","steps":["trace[398310118] 'process raft request'  (duration: 211.242251ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:34.562757Z","caller":"traceutil/trace.go:172","msg":"trace[131933018] transaction","detail":"{read_only:false; response_revision:72; number_of_response:1; }","duration":"215.818014ms","start":"2025-11-23T08:45:34.346926Z","end":"2025-11-23T08:45:34.562744Z","steps":["trace[131933018] 'process raft request'  (duration: 215.418776ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:55.690057Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.602958ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:45:55.690160Z","caller":"traceutil/trace.go:172","msg":"trace[1245239545] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:459; }","duration":"151.713356ms","start":"2025-11-23T08:45:55.538418Z","end":"2025-11-23T08:45:55.690132Z","steps":["trace[1245239545] 'agreement among raft nodes before linearized reading'  (duration: 46.210767ms)","trace[1245239545] 'range keys from in-memory index tree'  (duration: 105.341272ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:55.690157Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.383202ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356836321523809 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.187a9662dd577ca2\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.187a9662dd577ca2\" value_size:606 lease:6414984799466747063 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:45:55.690244Z","caller":"traceutil/trace.go:172","msg":"trace[1528482515] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"254.98572ms","start":"2025-11-23T08:45:55.435247Z","end":"2025-11-23T08:45:55.690232Z","steps":["trace[1528482515] 'process raft request'  (duration: 149.467092ms)","trace[1528482515] 'compare'  (duration: 105.254654ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:56.038488Z","caller":"traceutil/trace.go:172","msg":"trace[1300372701] linearizableReadLoop","detail":"{readStateIndex:474; appliedIndex:474; }","duration":"100.916956ms","start":"2025-11-23T08:45:55.937544Z","end":"2025-11-23T08:45:56.038461Z","steps":["trace[1300372701] 'read index received'  (duration: 100.905818ms)","trace[1300372701] 'applied index is now lower than readState.Index'  (duration: 9.741µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:56.038702Z","caller":"traceutil/trace.go:172","msg":"trace[1114811507] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"120.039039ms","start":"2025-11-23T08:45:55.918640Z","end":"2025-11-23T08:45:56.038679Z","steps":["trace[1114811507] 'process raft request'  (duration: 119.866594ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:56.038720Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.152426ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-319770\" limit:1 ","response":"range_response_count:1 size:4481"}
	{"level":"info","ts":"2025-11-23T08:45:56.038774Z","caller":"traceutil/trace.go:172","msg":"trace[710127127] range","detail":"{range_begin:/registry/minions/embed-certs-319770; range_end:; response_count:1; response_revision:461; }","duration":"101.230054ms","start":"2025-11-23T08:45:55.937533Z","end":"2025-11-23T08:45:56.038763Z","steps":["trace[710127127] 'agreement among raft nodes before linearized reading'  (duration: 101.013255ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:56.960157Z","caller":"traceutil/trace.go:172","msg":"trace[1292204379] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"130.717683ms","start":"2025-11-23T08:45:56.829417Z","end":"2025-11-23T08:45:56.960135Z","steps":["trace[1292204379] 'process raft request'  (duration: 130.59198ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:57.143193Z","caller":"traceutil/trace.go:172","msg":"trace[31166621] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"119.023624ms","start":"2025-11-23T08:45:57.024149Z","end":"2025-11-23T08:45:57.143172Z","steps":["trace[31166621] 'process raft request'  (duration: 108.932802ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:57.416574Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.977453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-23T08:45:57.416668Z","caller":"traceutil/trace.go:172","msg":"trace[1150367530] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:467; }","duration":"113.069744ms","start":"2025-11-23T08:45:57.303566Z","end":"2025-11-23T08:45:57.416635Z","steps":["trace[1150367530] 'range keys from in-memory index tree'  (duration: 112.804371ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:46:11 up  1:28,  0 user,  load average: 4.85, 3.31, 2.15
	Linux embed-certs-319770 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4454d77969b6bbbc3b66179d1a05d52831ca84f4d98a95048852a9201227cb0c] <==
	I1123 08:45:42.938361       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:45:42.938666       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:45:42.938811       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:45:42.938830       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:45:42.938855       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:45:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:45:43.230043       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:45:43.230096       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:45:43.230111       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:45:43.230298       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:45:43.531215       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:43.531246       1 metrics.go:72] Registering metrics
	I1123 08:45:43.531318       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:53.150769       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:45:53.150843       1 main.go:301] handling current node
	I1123 08:46:03.143775       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:46:03.143815       1 main.go:301] handling current node
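
The node-handling pairs above fire at 08:45:53 and 08:46:03, i.e. on a fixed 10-second cadence. The shape of such a reconcile loop in Go is a ticker (purely illustrative, not kindnet's code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Re-handle the node list on a fixed cadence, as the log above does.
        tick := time.NewTicker(10 * time.Second)
        defer tick.Stop()
        for t := range tick.C {
            fmt.Printf("%s handling current node\n", t.Format(time.TimeOnly))
        }
    }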
	
	
	==> kube-apiserver [4a05f14d7bdd8ce645d4bbd1e83e0e54a19e3e1f9a659ee034f61e97ad1459e9] <==
	I1123 08:45:32.410873       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:45:32.422188       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:45:32.425673       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:32.503023       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1123 08:45:32.503495       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 08:45:32.503825       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:32.723215       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:45:33.493164       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:45:34.101989       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:45:34.102134       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:45:35.162124       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:45:35.209736       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:45:35.309106       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:45:35.318276       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:45:35.320421       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:45:35.327777       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:45:35.356001       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:45:36.207384       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:45:36.217034       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:45:36.224819       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:45:41.061375       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:41.065725       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:41.157351       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:45:41.258542       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 08:46:07.094383       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:53876: use of closed network connection
	
	
	==> kube-controller-manager [30e28d8cdad13d87eb3dc82d3e5b3665ac6b0d80b028992178d2afe1a71cc099] <==
	I1123 08:45:40.353572       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:45:40.353598       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:45:40.353628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:45:40.353658       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:45:40.354021       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:45:40.354948       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:45:40.354984       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:40.354988       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:45:40.354995       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:45:40.355014       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:40.355045       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:45:40.355049       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:45:40.355235       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:45:40.355435       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:45:40.355599       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:45:40.355633       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:45:40.355791       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:45:40.355834       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:45:40.357318       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:45:40.357339       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:45:40.359260       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:40.361315       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:45:40.369091       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:45:40.373755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:55.433134       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
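
The wall of "Caches are synced" messages is client-go's shared-informer startup handshake: each controller lists and watches its resources, and work begins only once the local caches hold a complete snapshot (the same pattern appears in the kube-proxy log below). A minimal sketch of that handshake, assuming a default kubeconfig:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        stop := make(chan struct{})
        defer close(stop)

        factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
        factory.Core().V1().Pods().Informer() // register at least one informer
        factory.Start(stop)

        // Blocks until the initial LIST+WATCH has populated the local cache:
        // the moment the components above log "Caches are synced".
        for typ, ok := range factory.WaitForCacheSync(stop) {
            fmt.Printf("synced %v: %v\n", typ, ok)
        }
    }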
	
	
	==> kube-proxy [5f7d35ec59fcfd1c9cc2a482ffc8b8ad75e7ee0d38b8f9ba7a317ba6b099effb] <==
	I1123 08:45:42.497539       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:45:42.566337       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:45:42.667337       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:45:42.667385       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 08:45:42.667854       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:45:42.697972       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:45:42.698061       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:45:42.706582       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:45:42.706963       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:45:42.706987       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:42.708542       1 config.go:309] "Starting node config controller"
	I1123 08:45:42.708790       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:45:42.708698       1 config.go:200] "Starting service config controller"
	I1123 08:45:42.709093       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:45:42.708745       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:45:42.708735       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:45:42.709128       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:45:42.709132       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:45:42.809055       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:45:42.809593       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:45:42.809639       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:45:42.809661       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6268b5880694a881db962dc0b505da47995a15a55801cbf297a5676aa7ab6669] <==
	E1123 08:45:32.370999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:45:32.371020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:45:32.371168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:45:32.371217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:45:32.371280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:45:33.238941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:45:33.255417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:45:33.387023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:45:33.437454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:45:33.577416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:45:33.601702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:45:33.624210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:45:33.631428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:45:33.697394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:45:33.719884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:45:33.724202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:45:33.774081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:45:33.781451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:45:33.829002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:45:33.833357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:45:33.876376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:45:33.891745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:45:33.925265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:45:33.974005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1123 08:45:36.364722       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
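
The forbidden errors above are the usual startup race: the scheduler begins watching before the RBAC bootstrap roles are reconciled, and the retries succeed once they are (the final line shows its caches synced at 08:45:36). Whether a given identity holds one of these permissions can be checked with a SelfSubjectAccessReview; a sketch for the nodes/list permission denied above, assuming a default kubeconfig:

    package main

    import (
        "context"
        "fmt"

        authorizationv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Ask the apiserver whether the current identity may list nodes at
        // the cluster scope - the permission the scheduler was denied above.
        review := &authorizationv1.SelfSubjectAccessReview{
            Spec: authorizationv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authorizationv1.ResourceAttributes{
                    Verb:     "list",
                    Resource: "nodes",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
            Create(context.Background(), review, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
    }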
	
	
	==> kubelet <==
	Nov 23 08:45:40 embed-certs-319770 kubelet[1452]: I1123 08:45:40.319509    1452 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.286757    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8841647-df8d-4a10-bbbe-96e25fa96a6a-xtables-lock\") pod \"kube-proxy-h9zbj\" (UID: \"b8841647-df8d-4a10-bbbe-96e25fa96a6a\") " pod="kube-system/kube-proxy-h9zbj"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.286833    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9eb2add-33ca-4035-9dbb-3505ded226ed-xtables-lock\") pod \"kindnet-vp4s9\" (UID: \"f9eb2add-33ca-4035-9dbb-3505ded226ed\") " pod="kube-system/kindnet-vp4s9"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.286861    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwwjb\" (UniqueName: \"kubernetes.io/projected/f9eb2add-33ca-4035-9dbb-3505ded226ed-kube-api-access-hwwjb\") pod \"kindnet-vp4s9\" (UID: \"f9eb2add-33ca-4035-9dbb-3505ded226ed\") " pod="kube-system/kindnet-vp4s9"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.286899    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8841647-df8d-4a10-bbbe-96e25fa96a6a-kube-proxy\") pod \"kube-proxy-h9zbj\" (UID: \"b8841647-df8d-4a10-bbbe-96e25fa96a6a\") " pod="kube-system/kube-proxy-h9zbj"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.286945    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8841647-df8d-4a10-bbbe-96e25fa96a6a-lib-modules\") pod \"kube-proxy-h9zbj\" (UID: \"b8841647-df8d-4a10-bbbe-96e25fa96a6a\") " pod="kube-system/kube-proxy-h9zbj"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.287003    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9tvd\" (UniqueName: \"kubernetes.io/projected/b8841647-df8d-4a10-bbbe-96e25fa96a6a-kube-api-access-b9tvd\") pod \"kube-proxy-h9zbj\" (UID: \"b8841647-df8d-4a10-bbbe-96e25fa96a6a\") " pod="kube-system/kube-proxy-h9zbj"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.287098    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f9eb2add-33ca-4035-9dbb-3505ded226ed-cni-cfg\") pod \"kindnet-vp4s9\" (UID: \"f9eb2add-33ca-4035-9dbb-3505ded226ed\") " pod="kube-system/kindnet-vp4s9"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: I1123 08:45:41.287137    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9eb2add-33ca-4035-9dbb-3505ded226ed-lib-modules\") pod \"kindnet-vp4s9\" (UID: \"f9eb2add-33ca-4035-9dbb-3505ded226ed\") " pod="kube-system/kindnet-vp4s9"
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395435    1452 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395474    1452 projected.go:196] Error preparing data for projected volume kube-api-access-hwwjb for pod kube-system/kindnet-vp4s9: configmap "kube-root-ca.crt" not found
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395482    1452 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395505    1452 projected.go:196] Error preparing data for projected volume kube-api-access-b9tvd for pod kube-system/kube-proxy-h9zbj: configmap "kube-root-ca.crt" not found
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395566    1452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9eb2add-33ca-4035-9dbb-3505ded226ed-kube-api-access-hwwjb podName:f9eb2add-33ca-4035-9dbb-3505ded226ed nodeName:}" failed. No retries permitted until 2025-11-23 08:45:41.895537206 +0000 UTC m=+5.930421263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hwwjb" (UniqueName: "kubernetes.io/projected/f9eb2add-33ca-4035-9dbb-3505ded226ed-kube-api-access-hwwjb") pod "kindnet-vp4s9" (UID: "f9eb2add-33ca-4035-9dbb-3505ded226ed") : configmap "kube-root-ca.crt" not found
	Nov 23 08:45:41 embed-certs-319770 kubelet[1452]: E1123 08:45:41.395611    1452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b8841647-df8d-4a10-bbbe-96e25fa96a6a-kube-api-access-b9tvd podName:b8841647-df8d-4a10-bbbe-96e25fa96a6a nodeName:}" failed. No retries permitted until 2025-11-23 08:45:41.895593713 +0000 UTC m=+5.930477752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b9tvd" (UniqueName: "kubernetes.io/projected/b8841647-df8d-4a10-bbbe-96e25fa96a6a-kube-api-access-b9tvd") pod "kube-proxy-h9zbj" (UID: "b8841647-df8d-4a10-bbbe-96e25fa96a6a") : configmap "kube-root-ca.crt" not found
	Nov 23 08:45:43 embed-certs-319770 kubelet[1452]: I1123 08:45:43.134564    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vp4s9" podStartSLOduration=2.13453459 podStartE2EDuration="2.13453459s" podCreationTimestamp="2025-11-23 08:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:43.123693663 +0000 UTC m=+7.158577722" watchObservedRunningTime="2025-11-23 08:45:43.13453459 +0000 UTC m=+7.169418649"
	Nov 23 08:45:43 embed-certs-319770 kubelet[1452]: I1123 08:45:43.147207    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h9zbj" podStartSLOduration=2.147180715 podStartE2EDuration="2.147180715s" podCreationTimestamp="2025-11-23 08:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:43.147104931 +0000 UTC m=+7.181988990" watchObservedRunningTime="2025-11-23 08:45:43.147180715 +0000 UTC m=+7.182064779"
	Nov 23 08:45:53 embed-certs-319770 kubelet[1452]: I1123 08:45:53.228558    1452 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:45:53 embed-certs-319770 kubelet[1452]: I1123 08:45:53.375869    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87670385-ded2-45ae-961d-aa678c11ba46-config-volume\") pod \"coredns-66bc5c9577-7h498\" (UID: \"87670385-ded2-45ae-961d-aa678c11ba46\") " pod="kube-system/coredns-66bc5c9577-7h498"
	Nov 23 08:45:53 embed-certs-319770 kubelet[1452]: I1123 08:45:53.375911    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dl7\" (UniqueName: \"kubernetes.io/projected/ca0a7875-3a86-4485-b78e-497440bd0ce4-kube-api-access-n9dl7\") pod \"storage-provisioner\" (UID: \"ca0a7875-3a86-4485-b78e-497440bd0ce4\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:53 embed-certs-319770 kubelet[1452]: I1123 08:45:53.375931    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726ln\" (UniqueName: \"kubernetes.io/projected/87670385-ded2-45ae-961d-aa678c11ba46-kube-api-access-726ln\") pod \"coredns-66bc5c9577-7h498\" (UID: \"87670385-ded2-45ae-961d-aa678c11ba46\") " pod="kube-system/coredns-66bc5c9577-7h498"
	Nov 23 08:45:53 embed-certs-319770 kubelet[1452]: I1123 08:45:53.375944    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ca0a7875-3a86-4485-b78e-497440bd0ce4-tmp\") pod \"storage-provisioner\" (UID: \"ca0a7875-3a86-4485-b78e-497440bd0ce4\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:54 embed-certs-319770 kubelet[1452]: I1123 08:45:54.156165    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7h498" podStartSLOduration=13.156142638 podStartE2EDuration="13.156142638s" podCreationTimestamp="2025-11-23 08:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:54.156034667 +0000 UTC m=+18.190918725" watchObservedRunningTime="2025-11-23 08:45:54.156142638 +0000 UTC m=+18.191026703"
	Nov 23 08:45:57 embed-certs-319770 kubelet[1452]: I1123 08:45:57.021658    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.021613358 podStartE2EDuration="15.021613358s" podCreationTimestamp="2025-11-23 08:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:54.187165584 +0000 UTC m=+18.222049643" watchObservedRunningTime="2025-11-23 08:45:57.021613358 +0000 UTC m=+21.056497419"
	Nov 23 08:45:57 embed-certs-319770 kubelet[1452]: I1123 08:45:57.200194    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4cnm\" (UniqueName: \"kubernetes.io/projected/e1165604-fe4b-4b63-a3e2-5378a2836868-kube-api-access-h4cnm\") pod \"busybox\" (UID: \"e1165604-fe4b-4b63-a3e2-5378a2836868\") " pod="default/busybox"
	
	
	==> storage-provisioner [01edc02abad606a19c85fea8936232faabe985b45747a6aafb50e2f775b8c9c5] <==
	I1123 08:45:53.820590       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:53.824588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.833221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:53.833429       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:53.833580       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4cc2d284-b966-4474-bbd0-ff4c859e315e", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-319770_889d050f-5a76-4842-991f-3fbede1c7961 became leader
	I1123 08:45:53.833620       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-319770_889d050f-5a76-4842-991f-3fbede1c7961!
	W1123 08:45:53.841456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.851884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:53.934095       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-319770_889d050f-5a76-4842-991f-3fbede1c7961!
	W1123 08:45:55.915603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:56.040806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:58.045719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:58.054936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:00.058941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:00.064357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:02.068408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:02.073245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:04.077476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:04.081503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:06.084999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:06.089933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:08.096798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:08.102625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:10.121948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:10.151252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
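For context on the repeated client-go warnings in the storage-provisioner log above: the provisioner takes its leader-election lock on an Endpoints object (kube-system/k8s.io-minikube-hostpath, named in the lease-acquisition lines), so each renewal round-trips through the deprecated v1 Endpoints API and emits a warning. A minimal way to inspect the lock object, assuming kubectl still points at this cluster's context (the annotation name is the standard client-go resourcelock convention, not something shown in this log):

	# Inspect the Endpoints object used as the leader-election lock
	kubectl --context embed-certs-319770 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml
	# The current holder is recorded in the
	# control-plane.alpha.kubernetes.io/leader annotation.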
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-319770 -n embed-certs-319770
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-319770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (15.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-525009 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [32796117-bb98-432a-add6-234fb1c63a55] Pending
helpers_test.go:352: "busybox" [32796117-bb98-432a-add6-234fb1c63a55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [32796117-bb98-432a-add6-234fb1c63a55] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00585451s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-525009 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
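As with the other DeployApp failures in this report, the assertion that fails is the soft open-file limit inside the busybox container: the runtime handed the container the classic 1024 default rather than the 1048576 the test expects. A minimal sketch for localizing where the limit drops, assuming the profile is still running (only the first command is what the test actually ran; the `minikube ssh` and `systemctl` probes are illustrative additions):

	# Re-run the exact check the test performs
	kubectl --context default-k8s-diff-port-525009 exec busybox -- /bin/sh -c "ulimit -n"
	# Compare with the limit inside the minikube node container
	minikube -p default-k8s-diff-port-525009 ssh -- ulimit -n
	# ...and with the limit containerd itself was started under
	minikube -p default-k8s-diff-port-525009 ssh -- systemctl show containerd -p LimitNOFILE

If the node and containerd both report 1048576 while the pod reports 1024, the drop happens in the runtime's per-container rlimit handling rather than in the kicbase image.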
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-525009
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-525009:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4",
	        "Created": "2025-11-23T08:45:20.641806263Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:45:20.679171009Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4/hosts",
	        "LogPath": "/var/lib/docker/containers/c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4/c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4-json.log",
	        "Name": "/default-k8s-diff-port-525009",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-525009:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-525009",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4",
	                "LowerDir": "/var/lib/docker/overlay2/896af49d8e5454bea56efcdeacb288ebf1e46fb5df5e36f4c6bfb56731dcf18f-init/diff:/var/lib/docker/overlay2/ee04ca8b85d0dedeb02bd9a5189a59a7f53ca89a011d262a78df32fa43bf0598/diff",
	                "MergedDir": "/var/lib/docker/overlay2/896af49d8e5454bea56efcdeacb288ebf1e46fb5df5e36f4c6bfb56731dcf18f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/896af49d8e5454bea56efcdeacb288ebf1e46fb5df5e36f4c6bfb56731dcf18f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/896af49d8e5454bea56efcdeacb288ebf1e46fb5df5e36f4c6bfb56731dcf18f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-525009",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-525009/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-525009",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-525009",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-525009",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7e3254f93dec94d01254ed99a821592ff1a2b8997cc392fea2a55ec154a91263",
	            "SandboxKey": "/var/run/docker/netns/7e3254f93dec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-525009": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "60ec8d60a8fd3e2002b03ba35b0307bc7ba2f9470e8b1fef7d49e93a9a89f067",
	                    "EndpointID": "c2e90db3ff011874579de78f59f4973814bb31fa386c0b52a53da5cefc6325ad",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "12:fc:2e:80:5c:38",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-525009",
	                        "c9feb33c3b71"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
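Two details worth noting in the inspect output above: PortBindings only requests ephemeral host ports (every HostPort is empty), and the actual assignments land under NetworkSettings.Ports (22→33083, 2376→33084, 5000→33085, 8444→33086, 32443→33087). A single mapping can be pulled with the same Go-template pattern minikube's cli_runner uses later in this log:

	# Extract the host port mapped to the apiserver port 8444/tcp
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-525009
	# For this run the output would be 33086.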
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-525009 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-525009 logs -n 25: (1.418492645s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ start   │ -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p cert-expiration-680868 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-680868       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p kubernetes-upgrade-776670                                                                                                                                                                                                                        │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-319770 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-319770           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p cert-expiration-680868                                                                                                                                                                                                                           │ cert-expiration-680868       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-445958                                                                                                                                                                                                                     │ disable-driver-mounts-445958 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p default-k8s-diff-port-525009 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-525009 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-204346 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-204346 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ unpause │ -p old-k8s-version-204346 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-204346                                                                                                                                                                                                                           │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-204346                                                                                                                                                                                                                           │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ no-preload-999106 image list --format=json                                                                                                                                                                                                          │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-999106 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ unpause │ -p no-preload-999106 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p no-preload-999106                                                                                                                                                                                                                                │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p no-preload-999106                                                                                                                                                                                                                                │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p auto-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-794429                  │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-399335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ stop    │ -p newest-cni-399335 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ addons  │ enable dashboard -p newest-cni-399335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:46:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:46:02.262862  297115 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:46:02.263457  297115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:02.263472  297115 out.go:374] Setting ErrFile to fd 2...
	I1123 08:46:02.263479  297115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:02.263959  297115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:46:02.265014  297115 out.go:368] Setting JSON to false
	I1123 08:46:02.266198  297115 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5303,"bootTime":1763882259,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:46:02.266288  297115 start.go:143] virtualization: kvm guest
	I1123 08:46:02.268238  297115 out.go:179] * [newest-cni-399335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:46:02.270020  297115 notify.go:221] Checking for updates...
	I1123 08:46:02.270024  297115 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:46:02.271482  297115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:46:02.272843  297115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:46:02.274014  297115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:46:02.275227  297115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:46:02.276361  297115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:46:02.278076  297115 config.go:182] Loaded profile config "newest-cni-399335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:02.278849  297115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:46:02.305981  297115 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:46:02.306077  297115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:02.369456  297115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:46:02.357744797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:02.369605  297115 docker.go:319] overlay module found
	I1123 08:46:02.371588  297115 out.go:179] * Using the docker driver based on existing profile
	I1123 08:46:02.372889  297115 start.go:309] selected driver: docker
	I1123 08:46:02.372908  297115 start.go:927] validating driver "docker" against &{Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:02.373024  297115 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:46:02.373690  297115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:02.434152  297115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:46:02.423470428 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:02.434445  297115 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:46:02.434482  297115 cni.go:84] Creating CNI manager for ""
	I1123 08:46:02.434550  297115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:46:02.434584  297115 start.go:353] cluster config:
	{Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:02.437216  297115 out.go:179] * Starting "newest-cni-399335" primary control-plane node in "newest-cni-399335" cluster
	I1123 08:46:02.438363  297115 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:46:02.439542  297115 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:46:02.440662  297115 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:46:02.440696  297115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:46:02.440705  297115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 08:46:02.440721  297115 cache.go:65] Caching tarball of preloaded images
	I1123 08:46:02.440861  297115 preload.go:238] Found /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:46:02.440884  297115 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:46:02.440996  297115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/config.json ...
	I1123 08:46:02.462167  297115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:46:02.462192  297115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:46:02.462213  297115 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:46:02.462248  297115 start.go:360] acquireMachinesLock for newest-cni-399335: {Name:mka68fc1b11056460ac5dd4946687e6696340967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:46:02.462317  297115 start.go:364] duration metric: took 44.173µs to acquireMachinesLock for "newest-cni-399335"
	I1123 08:46:02.462339  297115 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:46:02.462349  297115 fix.go:54] fixHost starting: 
	I1123 08:46:02.462592  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:02.480611  297115 fix.go:112] recreateIfNeeded on newest-cni-399335: state=Stopped err=<nil>
	W1123 08:46:02.480640  297115 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:46:02.037790  293483 out.go:252]   - Generating certificates and keys ...
	I1123 08:46:02.037896  293483 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:46:02.037981  293483 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:46:02.456059  293483 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:46:02.650760  293483 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:46:02.892889  293483 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:46:03.433697  293483 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:46:03.596148  293483 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:46:03.596284  293483 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-794429 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:46:03.904760  293483 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:46:03.904904  293483 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-794429 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:46:04.138573  293483 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:46:04.371416  293483 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:46:04.533631  293483 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:46:04.533727  293483 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:46:05.059932  293483 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:46:05.296891  293483 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:46:05.532157  293483 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:46:05.911922  293483 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:46:06.189126  293483 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:46:06.190020  293483 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:46:06.206499  293483 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:46:06.209148  293483 out.go:252]   - Booting up control plane ...
	I1123 08:46:06.209257  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:46:06.209349  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:46:06.209433  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:46:06.223747  293483 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:46:06.223880  293483 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:46:06.230267  293483 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:46:06.230625  293483 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:46:06.230707  293483 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:46:06.333353  293483 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:46:06.333489  293483 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:46:02.482405  297115 out.go:252] * Restarting existing docker container for "newest-cni-399335" ...
	I1123 08:46:02.482477  297115 cli_runner.go:164] Run: docker start newest-cni-399335
	I1123 08:46:02.785631  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:02.807142  297115 kic.go:430] container "newest-cni-399335" state is running.
	I1123 08:46:02.807612  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:02.827013  297115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/config.json ...
	I1123 08:46:02.827313  297115 machine.go:94] provisionDockerMachine start ...
	I1123 08:46:02.827393  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:02.848474  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:02.848851  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:02.848869  297115 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:46:02.849609  297115 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48164->127.0.0.1:33098: read: connection reset by peer
	I1123 08:46:05.993595  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-399335
	
	I1123 08:46:05.993630  297115 ubuntu.go:182] provisioning hostname "newest-cni-399335"
	I1123 08:46:05.993706  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.012745  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:06.012960  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:06.012974  297115 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-399335 && echo "newest-cni-399335" | sudo tee /etc/hostname
	I1123 08:46:06.167781  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-399335
	
	I1123 08:46:06.167881  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.188339  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:06.188686  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:06.188719  297115 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-399335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-399335/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-399335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:46:06.342749  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:46:06.342777  297115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-13876/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-13876/.minikube}
	I1123 08:46:06.342822  297115 ubuntu.go:190] setting up certificates
	I1123 08:46:06.342839  297115 provision.go:84] configureAuth start
	I1123 08:46:06.342903  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:06.364340  297115 provision.go:143] copyHostCerts
	I1123 08:46:06.364416  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem, removing ...
	I1123 08:46:06.364431  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem
	I1123 08:46:06.364526  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem (1078 bytes)
	I1123 08:46:06.364669  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem, removing ...
	I1123 08:46:06.364683  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem
	I1123 08:46:06.364724  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem (1123 bytes)
	I1123 08:46:06.364792  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem, removing ...
	I1123 08:46:06.364799  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem
	I1123 08:46:06.364823  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem (1675 bytes)
	I1123 08:46:06.364877  297115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem org=jenkins.newest-cni-399335 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-399335]
	I1123 08:46:06.479812  297115 provision.go:177] copyRemoteCerts
	I1123 08:46:06.479870  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:46:06.479911  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.500499  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.603344  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:46:06.621631  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:46:06.640892  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:46:06.659451  297115 provision.go:87] duration metric: took 316.596054ms to configureAuth
	I1123 08:46:06.659481  297115 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:46:06.659806  297115 config.go:182] Loaded profile config "newest-cni-399335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:06.659823  297115 machine.go:97] duration metric: took 3.832490175s to provisionDockerMachine
	I1123 08:46:06.659835  297115 start.go:293] postStartSetup for "newest-cni-399335" (driver="docker")
	I1123 08:46:06.659849  297115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:46:06.659904  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:46:06.659946  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.678221  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.780370  297115 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:46:06.783936  297115 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:46:06.783965  297115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:46:06.783976  297115 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/addons for local assets ...
	I1123 08:46:06.784034  297115 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/files for local assets ...
	I1123 08:46:06.784128  297115 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem -> 174422.pem in /etc/ssl/certs
	I1123 08:46:06.784237  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:46:06.791552  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem --> /etc/ssl/certs/174422.pem (1708 bytes)
	I1123 08:46:06.809068  297115 start.go:296] duration metric: took 149.216822ms for postStartSetup
	I1123 08:46:06.809157  297115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:46:06.809195  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.829536  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.933880  297115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:46:06.938359  297115 fix.go:56] duration metric: took 4.476004793s for fixHost
	I1123 08:46:06.938381  297115 start.go:83] releasing machines lock for "newest-cni-399335", held for 4.476053793s
	I1123 08:46:06.938445  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:06.957272  297115 ssh_runner.go:195] Run: cat /version.json
	I1123 08:46:06.957329  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.957376  297115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:46:06.957477  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.979733  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.981876  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:07.156878  297115 ssh_runner.go:195] Run: systemctl --version
	I1123 08:46:07.164235  297115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:46:07.169524  297115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:46:07.169588  297115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:46:07.180131  297115 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:46:07.180160  297115 start.go:496] detecting cgroup driver to use...
	I1123 08:46:07.180197  297115 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:46:07.180249  297115 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:46:07.202860  297115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:46:07.219930  297115 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:46:07.219994  297115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:46:07.238447  297115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:46:07.254293  297115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	b6283e3a15171       56cc512116c8f       8 seconds ago       Running             busybox                   0                   d9ef8a7ac5969       busybox                                                default
	4774054442e89       52546a367cc9e       14 seconds ago      Running             coredns                   0                   c41313d8dc21b       coredns-66bc5c9577-2gcbt                               kube-system
	09d20966a4f14       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   6073dfc356881       storage-provisioner                                    kube-system
	6ba4b4be51644       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   1c4d1b2cb4ca0       kindnet-lxbpk                                          kube-system
	4852c9eb42fa6       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   8867e565e311d       kube-proxy-7ctpr                                       kube-system
	fef96430b5d9d       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   f282ca97edc23       kube-scheduler-default-k8s-diff-port-525009            kube-system
	e18f3fb5d67d8       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   044906c7cb6e1       kube-controller-manager-default-k8s-diff-port-525009   kube-system
	363d42cf4fe73       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   f71ac8fe7d9e9       etcd-default-k8s-diff-port-525009                      kube-system
	93d371134d8cd       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   09d16f7253147       kube-apiserver-default-k8s-diff-port-525009            kube-system
	
	
	==> containerd <==
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.843693474Z" level=info msg="CreateContainer within sandbox \"6073dfc3568819bf1c3fe1eb5a2dae61a2135dc48eb78c0eff4859e9f8f63527\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"09d20966a4f14c03a208eb262fe86a49db6385ee3e7f03c598bf34355292fcbb\""
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.845364859Z" level=info msg="StartContainer for \"09d20966a4f14c03a208eb262fe86a49db6385ee3e7f03c598bf34355292fcbb\""
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.847103160Z" level=info msg="connecting to shim 09d20966a4f14c03a208eb262fe86a49db6385ee3e7f03c598bf34355292fcbb" address="unix:///run/containerd/s/e4a658ec7fe11cac1ecdf2a34d1ebcbbca1ca0d21884617dce4c8785d6c6df63" protocol=ttrpc version=3
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.849363129Z" level=info msg="Container 4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.859204790Z" level=info msg="CreateContainer within sandbox \"c41313d8dc21b68452aa1845287da26ff89fa2766c1bdf5ef3c7e04c5f3cf1d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de\""
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.860160546Z" level=info msg="StartContainer for \"4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de\""
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.861489121Z" level=info msg="connecting to shim 4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de" address="unix:///run/containerd/s/e94c14217f418e4b67411429633a1d9361554afbabba216e1588192901a0b54a" protocol=ttrpc version=3
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.916366596Z" level=info msg="StartContainer for \"09d20966a4f14c03a208eb262fe86a49db6385ee3e7f03c598bf34355292fcbb\" returns successfully"
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.919096592Z" level=info msg="StartContainer for \"4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de\" returns successfully"
	Nov 23 08:45:57 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:57.464354593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:32796117-bb98-432a-add6-234fb1c63a55,Namespace:default,Attempt:0,}"
	Nov 23 08:45:57 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:57.505607890Z" level=info msg="connecting to shim d9ef8a7ac5969fc85790715a4653de637f334c7df54528f877a67e99d3a765b8" address="unix:///run/containerd/s/7b43b7aca51f60fa1a5caec7276f77a1b93916ecb8fc999956fa019237417300" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:45:57 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:57.588086443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:32796117-bb98-432a-add6-234fb1c63a55,Namespace:default,Attempt:0,} returns sandbox id \"d9ef8a7ac5969fc85790715a4653de637f334c7df54528f877a67e99d3a765b8\""
	Nov 23 08:45:57 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:57.590947480Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.257336191Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.259001304Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.260156530Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.262815365Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.264288915Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.673293046s"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.264349029Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.270443864Z" level=info msg="CreateContainer within sandbox \"d9ef8a7ac5969fc85790715a4653de637f334c7df54528f877a67e99d3a765b8\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.277865417Z" level=info msg="Container b6283e3a15171fd4661f91afeb1ae232a4a3cedd658ea18e9060f8845354c069: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.284839843Z" level=info msg="CreateContainer within sandbox \"d9ef8a7ac5969fc85790715a4653de637f334c7df54528f877a67e99d3a765b8\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"b6283e3a15171fd4661f91afeb1ae232a4a3cedd658ea18e9060f8845354c069\""
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.285480177Z" level=info msg="StartContainer for \"b6283e3a15171fd4661f91afeb1ae232a4a3cedd658ea18e9060f8845354c069\""
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.286810534Z" level=info msg="connecting to shim b6283e3a15171fd4661f91afeb1ae232a4a3cedd658ea18e9060f8845354c069" address="unix:///run/containerd/s/7b43b7aca51f60fa1a5caec7276f77a1b93916ecb8fc999956fa019237417300" protocol=ttrpc version=3
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.345700843Z" level=info msg="StartContainer for \"b6283e3a15171fd4661f91afeb1ae232a4a3cedd658ea18e9060f8845354c069\" returns successfully"
	
	
	==> coredns [4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40612 - 16684 "HINFO IN 5057302270381051508.4497259876176752764. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070309215s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-525009
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-525009
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=default-k8s-diff-port-525009
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_45_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:45:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-525009
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:46:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-525009
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d9e25572-4377-46e5-9d0d-6e4e67e6d372
	  Boot ID:                    3bab2277-1db4-4284-9fcc-5d1d58e87eb4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-2gcbt                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-525009                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-lxbpk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-525009             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-525009    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-7ctpr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-525009             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node default-k8s-diff-port-525009 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node default-k8s-diff-port-525009 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node default-k8s-diff-port-525009 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node default-k8s-diff-port-525009 event: Registered Node default-k8s-diff-port-525009 in Controller
	  Normal  NodeReady                15s   kubelet          Node default-k8s-diff-port-525009 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395963] i8042: Warning: Keylock active
	[  +0.012075] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497035] block sda: the capability attribute has been deprecated.
	[  +0.088048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022581] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.308229] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [363d42cf4fe7388c30cc7709ae186282cd614789de7beed7e6e99c1741fac7d2] <==
	{"level":"warn","ts":"2025-11-23T08:45:31.347383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.355254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.362554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.372514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.394361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.404110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.414194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35572","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:45:33.141525Z","caller":"traceutil/trace.go:172","msg":"trace[357463721] linearizableReadLoop","detail":"{readStateIndex:71; appliedIndex:71; }","duration":"141.942706ms","start":"2025-11-23T08:45:32.999550Z","end":"2025-11-23T08:45:33.141493Z","steps":["trace[357463721] 'read index received'  (duration: 141.932956ms)","trace[357463721] 'applied index is now lower than readState.Index'  (duration: 8.252µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:33.141726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.171088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:45:33.141823Z","caller":"traceutil/trace.go:172","msg":"trace[1837473444] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:0; response_revision:67; }","duration":"142.29202ms","start":"2025-11-23T08:45:32.999519Z","end":"2025-11-23T08:45:33.141811Z","steps":["trace[1837473444] 'agreement among raft nodes before linearized reading'  (duration: 142.057341ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:33.141817Z","caller":"traceutil/trace.go:172","msg":"trace[2099524511] transaction","detail":"{read_only:false; response_revision:68; number_of_response:1; }","duration":"143.908697ms","start":"2025-11-23T08:45:32.997878Z","end":"2025-11-23T08:45:33.141787Z","steps":["trace[2099524511] 'process raft request'  (duration: 143.658219ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:33.270928Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.608528ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:45:33.270997Z","caller":"traceutil/trace.go:172","msg":"trace[700839492] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:0; response_revision:68; }","duration":"123.687523ms","start":"2025-11-23T08:45:33.147289Z","end":"2025-11-23T08:45:33.270977Z","steps":["trace[700839492] 'agreement among raft nodes before linearized reading'  (duration: 55.931217ms)","trace[700839492] 'range keys from in-memory index tree'  (duration: 67.642032ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:33.271009Z","caller":"traceutil/trace.go:172","msg":"trace[953933201] transaction","detail":"{read_only:false; response_revision:69; number_of_response:1; }","duration":"124.714767ms","start":"2025-11-23T08:45:33.146278Z","end":"2025-11-23T08:45:33.270993Z","steps":["trace[953933201] 'process raft request'  (duration: 56.952352ms)","trace[953933201] 'compare'  (duration: 67.661392ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:33.618934Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.510738ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361494636771 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:discovery\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:discovery\" value_size:587 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T08:45:33.619032Z","caller":"traceutil/trace.go:172","msg":"trace[1534971142] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"266.153516ms","start":"2025-11-23T08:45:33.352865Z","end":"2025-11-23T08:45:33.619019Z","steps":["trace[1534971142] 'process raft request'  (duration: 122.175575ms)","trace[1534971142] 'compare'  (duration: 143.407784ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:34.100805Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.653202ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361494636776 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:basic-user\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:basic-user\" value_size:617 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T08:45:34.100936Z","caller":"traceutil/trace.go:172","msg":"trace[1209360061] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"415.816418ms","start":"2025-11-23T08:45:33.685099Z","end":"2025-11-23T08:45:34.100916Z","steps":["trace[1209360061] 'process raft request'  (duration: 201.774693ms)","trace[1209360061] 'compare'  (duration: 213.497859ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:34.101010Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:45:33.685070Z","time spent":"415.899223ms","remote":"127.0.0.1:34790","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":665,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:basic-user\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:basic-user\" value_size:617 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T08:45:34.332421Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.04942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:45:34.332493Z","caller":"traceutil/trace.go:172","msg":"trace[1410638820] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:75; }","duration":"133.140694ms","start":"2025-11-23T08:45:34.199335Z","end":"2025-11-23T08:45:34.332476Z","steps":["trace[1410638820] 'agreement among raft nodes before linearized reading'  (duration: 50.929379ms)","trace[1410638820] 'range keys from in-memory index tree'  (duration: 82.069197ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:34.332496Z","caller":"traceutil/trace.go:172","msg":"trace[726635762] transaction","detail":"{read_only:false; response_revision:76; number_of_response:1; }","duration":"204.814639ms","start":"2025-11-23T08:45:34.127671Z","end":"2025-11-23T08:45:34.332485Z","steps":["trace[726635762] 'process raft request'  (duration: 122.614215ms)","trace[726635762] 'compare'  (duration: 82.073667ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:34.562302Z","caller":"traceutil/trace.go:172","msg":"trace[1605988146] transaction","detail":"{read_only:false; response_revision:79; number_of_response:1; }","duration":"214.497571ms","start":"2025-11-23T08:45:34.347783Z","end":"2025-11-23T08:45:34.562280Z","steps":["trace[1605988146] 'process raft request'  (duration: 129.22222ms)","trace[1605988146] 'compare'  (duration: 85.143646ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:56.960204Z","caller":"traceutil/trace.go:172","msg":"trace[967777746] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"129.675122ms","start":"2025-11-23T08:45:56.830466Z","end":"2025-11-23T08:45:56.960141Z","steps":["trace[967777746] 'process raft request'  (duration: 129.553077ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:57.149538Z","caller":"traceutil/trace.go:172","msg":"trace[642534120] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"125.582415ms","start":"2025-11-23T08:45:57.023931Z","end":"2025-11-23T08:45:57.149514Z","steps":["trace[642534120] 'process raft request'  (duration: 105.632084ms)","trace[642534120] 'compare'  (duration: 19.823896ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:46:08 up  1:28,  0 user,  load average: 4.05, 3.13, 2.08
	Linux default-k8s-diff-port-525009 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ba4b4be51644873d13828c75634af0b29f18bcd762b670bb61f7bb1d6243bdf] <==
	I1123 08:45:42.968223       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:45:43.062186       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 08:45:43.062339       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:45:43.062361       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:45:43.062392       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:45:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:45:43.237432       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:45:43.237492       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:45:43.237505       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:45:43.262328       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:45:43.662112       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:43.662171       1 metrics.go:72] Registering metrics
	I1123 08:45:43.662260       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:53.236910       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:53.236992       1 main.go:301] handling current node
	I1123 08:46:03.236834       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:46:03.236889       1 main.go:301] handling current node
	
	
	==> kube-apiserver [93d371134d8cd68f1d1abbdbbb0e1c610c76d9846c79f93d6a42eb5eae9a4a83] <==
	I1123 08:45:32.104999       1 policy_source.go:240] refreshing policies
	I1123 08:45:32.109023       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:45:32.114635       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:45:32.213770       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:45:32.213894       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:32.228795       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:45:32.229270       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:33.142985       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:45:33.271970       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:45:33.271991       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:45:35.091534       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:45:35.141306       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:45:35.207310       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:45:35.214898       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 08:45:35.216242       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:45:35.221164       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:45:36.013246       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:45:36.287020       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:45:36.296384       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:45:36.306970       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:45:41.820244       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:45:41.873222       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:45:41.970181       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:41.978189       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1123 08:46:07.095065       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:60582: use of closed network connection
	
	
	==> kube-controller-manager [e18f3fb5d67d896c3d577be63c9cb8343047e7da54b7279f99d061e515762679] <==
	I1123 08:45:41.011190       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:45:41.011077       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:45:41.011277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:45:41.011538       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:45:41.011561       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:45:41.011778       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:45:41.011872       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:45:41.011978       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:45:41.011986       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:45:41.012000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:45:41.012217       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:45:41.012240       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:45:41.014897       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:45:41.017212       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:41.017214       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:45:41.017290       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:45:41.017326       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:45:41.017338       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:45:41.017346       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:45:41.023833       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:45:41.023865       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-525009" podCIDRs=["10.244.0.0/24"]
	I1123 08:45:41.030830       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:45:41.038141       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:45:41.043152       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:56.013467       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4852c9eb42fa660c7c6109874aa516cf092f5350c45091e6c797fdd3465dd725] <==
	I1123 08:45:42.668629       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:45:42.734397       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:45:42.835399       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:45:42.835443       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 08:45:42.835582       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:45:42.860522       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:45:42.860583       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:45:42.867423       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:45:42.867852       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:45:42.867891       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:42.869436       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:45:42.869455       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:45:42.869971       1 config.go:200] "Starting service config controller"
	I1123 08:45:42.870049       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:45:42.870095       1 config.go:309] "Starting node config controller"
	I1123 08:45:42.870134       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:45:42.870165       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:45:42.870202       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:45:42.870210       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:45:42.969700       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:45:42.971096       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:45:42.971107       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fef96430b5d9df5cbc8184cb5f35d169dcc440b031c8acd15783ac4bcc6107b3] <==
	E1123 08:45:32.085666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:45:32.085720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:45:32.085797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:45:32.085835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:45:32.903353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:45:32.907922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:45:32.921521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:45:32.932021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:45:32.971664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:45:32.985154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:45:33.000310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:45:33.037777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:45:33.096115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:45:33.146393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:45:33.222994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:45:33.255459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:45:33.272913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:45:33.296236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:45:33.452255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:45:33.530758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:45:33.629240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:45:33.636711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:45:33.669272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:45:34.673132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1123 08:45:35.772218       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:45:37 default-k8s-diff-port-525009 kubelet[1418]: E1123 08:45:37.189972    1418 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-525009\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-525009"
	Nov 23 08:45:37 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:37.203635    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-525009" podStartSLOduration=1.203609293 podStartE2EDuration="1.203609293s" podCreationTimestamp="2025-11-23 08:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:37.203496749 +0000 UTC m=+1.153897540" watchObservedRunningTime="2025-11-23 08:45:37.203609293 +0000 UTC m=+1.154010065"
	Nov 23 08:45:37 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:37.225483    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-525009" podStartSLOduration=4.225451377 podStartE2EDuration="4.225451377s" podCreationTimestamp="2025-11-23 08:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:37.21463144 +0000 UTC m=+1.165032229" watchObservedRunningTime="2025-11-23 08:45:37.225451377 +0000 UTC m=+1.175852164"
	Nov 23 08:45:37 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:37.236802    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-525009" podStartSLOduration=1.23677989 podStartE2EDuration="1.23677989s" podCreationTimestamp="2025-11-23 08:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:37.225748665 +0000 UTC m=+1.176149450" watchObservedRunningTime="2025-11-23 08:45:37.23677989 +0000 UTC m=+1.187180672"
	Nov 23 08:45:37 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:37.237002    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-525009" podStartSLOduration=1.236992652 podStartE2EDuration="1.236992652s" podCreationTimestamp="2025-11-23 08:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:37.236319518 +0000 UTC m=+1.186720330" watchObservedRunningTime="2025-11-23 08:45:37.236992652 +0000 UTC m=+1.187393437"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.115860    1418 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.116516    1418 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880452    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b3eb58d0-c417-44b6-b3d4-13858fb320d6-cni-cfg\") pod \"kindnet-lxbpk\" (UID: \"b3eb58d0-c417-44b6-b3d4-13858fb320d6\") " pod="kube-system/kindnet-lxbpk"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880511    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fh8v\" (UniqueName: \"kubernetes.io/projected/b3eb58d0-c417-44b6-b3d4-13858fb320d6-kube-api-access-4fh8v\") pod \"kindnet-lxbpk\" (UID: \"b3eb58d0-c417-44b6-b3d4-13858fb320d6\") " pod="kube-system/kindnet-lxbpk"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880546    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6001077d-b9c4-4cc0-be56-daf8665fd2d8-kube-proxy\") pod \"kube-proxy-7ctpr\" (UID: \"6001077d-b9c4-4cc0-be56-daf8665fd2d8\") " pod="kube-system/kube-proxy-7ctpr"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880574    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6001077d-b9c4-4cc0-be56-daf8665fd2d8-xtables-lock\") pod \"kube-proxy-7ctpr\" (UID: \"6001077d-b9c4-4cc0-be56-daf8665fd2d8\") " pod="kube-system/kube-proxy-7ctpr"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880599    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfgbx\" (UniqueName: \"kubernetes.io/projected/6001077d-b9c4-4cc0-be56-daf8665fd2d8-kube-api-access-hfgbx\") pod \"kube-proxy-7ctpr\" (UID: \"6001077d-b9c4-4cc0-be56-daf8665fd2d8\") " pod="kube-system/kube-proxy-7ctpr"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880621    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3eb58d0-c417-44b6-b3d4-13858fb320d6-xtables-lock\") pod \"kindnet-lxbpk\" (UID: \"b3eb58d0-c417-44b6-b3d4-13858fb320d6\") " pod="kube-system/kindnet-lxbpk"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880667    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6001077d-b9c4-4cc0-be56-daf8665fd2d8-lib-modules\") pod \"kube-proxy-7ctpr\" (UID: \"6001077d-b9c4-4cc0-be56-daf8665fd2d8\") " pod="kube-system/kube-proxy-7ctpr"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880695    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3eb58d0-c417-44b6-b3d4-13858fb320d6-lib-modules\") pod \"kindnet-lxbpk\" (UID: \"b3eb58d0-c417-44b6-b3d4-13858fb320d6\") " pod="kube-system/kindnet-lxbpk"
	Nov 23 08:45:43 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:43.205050    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lxbpk" podStartSLOduration=2.205026828 podStartE2EDuration="2.205026828s" podCreationTimestamp="2025-11-23 08:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:43.204937209 +0000 UTC m=+7.155337996" watchObservedRunningTime="2025-11-23 08:45:43.205026828 +0000 UTC m=+7.155427613"
	Nov 23 08:45:43 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:43.215798    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ctpr" podStartSLOduration=2.21577867 podStartE2EDuration="2.21577867s" podCreationTimestamp="2025-11-23 08:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:43.215736419 +0000 UTC m=+7.166137206" watchObservedRunningTime="2025-11-23 08:45:43.21577867 +0000 UTC m=+7.166179456"
	Nov 23 08:45:53 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:53.330407    1418 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:45:53 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:53.461724    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d7b0b879-ccf8-4b50-8333-06358ff1cb0e-tmp\") pod \"storage-provisioner\" (UID: \"d7b0b879-ccf8-4b50-8333-06358ff1cb0e\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:53 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:53.461781    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rhz\" (UniqueName: \"kubernetes.io/projected/d7b0b879-ccf8-4b50-8333-06358ff1cb0e-kube-api-access-w4rhz\") pod \"storage-provisioner\" (UID: \"d7b0b879-ccf8-4b50-8333-06358ff1cb0e\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:53 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:53.461813    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpkf\" (UniqueName: \"kubernetes.io/projected/366c0bf5-fc19-4019-a4d4-5fe5065c0e8e-kube-api-access-fvpkf\") pod \"coredns-66bc5c9577-2gcbt\" (UID: \"366c0bf5-fc19-4019-a4d4-5fe5065c0e8e\") " pod="kube-system/coredns-66bc5c9577-2gcbt"
	Nov 23 08:45:53 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:53.461841    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/366c0bf5-fc19-4019-a4d4-5fe5065c0e8e-config-volume\") pod \"coredns-66bc5c9577-2gcbt\" (UID: \"366c0bf5-fc19-4019-a4d4-5fe5065c0e8e\") " pod="kube-system/coredns-66bc5c9577-2gcbt"
	Nov 23 08:45:54 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:54.244051    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2gcbt" podStartSLOduration=12.244027173 podStartE2EDuration="12.244027173s" podCreationTimestamp="2025-11-23 08:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:54.243797176 +0000 UTC m=+18.194197961" watchObservedRunningTime="2025-11-23 08:45:54.244027173 +0000 UTC m=+18.194427960"
	Nov 23 08:45:54 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:54.275057    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.275033883 podStartE2EDuration="12.275033883s" podCreationTimestamp="2025-11-23 08:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:54.274782652 +0000 UTC m=+18.225183439" watchObservedRunningTime="2025-11-23 08:45:54.275033883 +0000 UTC m=+18.225434669"
	Nov 23 08:45:57 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:57.186730    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwnr\" (UniqueName: \"kubernetes.io/projected/32796117-bb98-432a-add6-234fb1c63a55-kube-api-access-2mwnr\") pod \"busybox\" (UID: \"32796117-bb98-432a-add6-234fb1c63a55\") " pod="default/busybox"
	
	
	==> storage-provisioner [09d20966a4f14c03a208eb262fe86a49db6385ee3e7f03c598bf34355292fcbb] <==
	I1123 08:45:53.925840       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:45:53.937689       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:45:53.937772       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:53.946306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.956828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:53.957020       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:53.957165       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-525009_2f00e70c-c16e-4929-8048-2d208c7a7368!
	I1123 08:45:53.957233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"37a993b0-56fe-4e80-9014-ec7b94dcad63", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-525009_2f00e70c-c16e-4929-8048-2d208c7a7368 became leader
	W1123 08:45:53.961229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.971524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:54.060843       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-525009_2f00e70c-c16e-4929-8048-2d208c7a7368!
	W1123 08:45:55.974966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:56.040269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:58.045943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:58.050739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:00.053563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:00.059146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:02.062768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:02.068295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:04.071432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:04.075498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:06.079367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:06.084923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:08.089364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:08.096583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
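Note on the storage-provisioner warnings in the logs above: minikube's hostpath provisioner still takes its leader-election lock through the core v1 Endpoints API, which newer API servers flag as deprecated in favor of discovery.k8s.io/v1 EndpointSlice, so the repeated W-lines are expected noise rather than a failure. The lock object itself can be inspected directly; a diagnostic sketch, assuming the test context is still present:

  kubectl --context default-k8s-diff-port-525009 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml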
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-525009 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
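The kube-scheduler "Failed to watch ... is forbidden" lines near the top of those logs are a startup-ordering effect: the scheduler's informers begin listing cluster resources before the API server has finished reconciling the scheduler's RBAC bindings, and the errors stop once "Caches are synced" is logged. If they persisted, the scheduler's effective permissions could be probed directly; a minimal sketch, assuming the context is still reachable:

  kubectl --context default-k8s-diff-port-525009 auth can-i list pods --as=system:kube-scheduler
  kubectl --context default-k8s-diff-port-525009 auth can-i list statefulsets.apps --as=system:kube-scheduler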
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-525009
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-525009:

-- stdout --
	[
	    {
	        "Id": "c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4",
	        "Created": "2025-11-23T08:45:20.641806263Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283766,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:45:20.679171009Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4/hosts",
	        "LogPath": "/var/lib/docker/containers/c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4/c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4-json.log",
	        "Name": "/default-k8s-diff-port-525009",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-525009:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-525009",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9feb33c3b710bf9f778cdd38f08b9b0f992116d2ccf8e2a2f0b6c0a72ca3dd4",
	                "LowerDir": "/var/lib/docker/overlay2/896af49d8e5454bea56efcdeacb288ebf1e46fb5df5e36f4c6bfb56731dcf18f-init/diff:/var/lib/docker/overlay2/ee04ca8b85d0dedeb02bd9a5189a59a7f53ca89a011d262a78df32fa43bf0598/diff",
	                "MergedDir": "/var/lib/docker/overlay2/896af49d8e5454bea56efcdeacb288ebf1e46fb5df5e36f4c6bfb56731dcf18f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/896af49d8e5454bea56efcdeacb288ebf1e46fb5df5e36f4c6bfb56731dcf18f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/896af49d8e5454bea56efcdeacb288ebf1e46fb5df5e36f4c6bfb56731dcf18f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-525009",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-525009/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-525009",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-525009",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-525009",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7e3254f93dec94d01254ed99a821592ff1a2b8997cc392fea2a55ec154a91263",
	            "SandboxKey": "/var/run/docker/netns/7e3254f93dec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-525009": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "60ec8d60a8fd3e2002b03ba35b0307bc7ba2f9470e8b1fef7d49e93a9a89f067",
	                    "EndpointID": "c2e90db3ff011874579de78f59f4973814bb31fa386c0b52a53da5cefc6325ad",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "12:fc:2e:80:5c:38",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-525009",
	                        "c9feb33c3b71"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
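The inspect output above shows the profile's distinguishing configuration: the API server port 8444 is published only on the loopback interface and NAT-mapped to an ephemeral host port. The effective mapping can be read back without parsing the full JSON; a sketch using the container name from this run:

  docker port default-k8s-diff-port-525009 8444
  # prints the host binding shown in NetworkSettings above, i.e. 127.0.0.1:33086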
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-525009 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-525009 logs -n 25: (1.457810442s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ start   │ -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p cert-expiration-680868 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-680868       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p kubernetes-upgrade-776670                                                                                                                                                                                                                        │ kubernetes-upgrade-776670    │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-319770 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-319770           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p cert-expiration-680868                                                                                                                                                                                                                           │ cert-expiration-680868       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-445958                                                                                                                                                                                                                     │ disable-driver-mounts-445958 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p default-k8s-diff-port-525009 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-525009 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-204346 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-204346 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ unpause │ -p old-k8s-version-204346 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-204346                                                                                                                                                                                                                           │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-204346                                                                                                                                                                                                                           │ old-k8s-version-204346       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ no-preload-999106 image list --format=json                                                                                                                                                                                                          │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-999106 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ unpause │ -p no-preload-999106 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p no-preload-999106                                                                                                                                                                                                                                │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p no-preload-999106                                                                                                                                                                                                                                │ no-preload-999106            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p auto-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-794429                  │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-399335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ stop    │ -p newest-cni-399335 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ addons  │ enable dashboard -p newest-cni-399335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-399335            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:46:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:46:02.262862  297115 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:46:02.263457  297115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:02.263472  297115 out.go:374] Setting ErrFile to fd 2...
	I1123 08:46:02.263479  297115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:02.263959  297115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:46:02.265014  297115 out.go:368] Setting JSON to false
	I1123 08:46:02.266198  297115 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5303,"bootTime":1763882259,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:46:02.266288  297115 start.go:143] virtualization: kvm guest
	I1123 08:46:02.268238  297115 out.go:179] * [newest-cni-399335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:46:02.270020  297115 notify.go:221] Checking for updates...
	I1123 08:46:02.270024  297115 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:46:02.271482  297115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:46:02.272843  297115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:46:02.274014  297115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:46:02.275227  297115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:46:02.276361  297115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:46:02.278076  297115 config.go:182] Loaded profile config "newest-cni-399335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:02.278849  297115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:46:02.305981  297115 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:46:02.306077  297115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:02.369456  297115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:46:02.357744797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:02.369605  297115 docker.go:319] overlay module found
	I1123 08:46:02.371588  297115 out.go:179] * Using the docker driver based on existing profile
	I1123 08:46:02.372889  297115 start.go:309] selected driver: docker
	I1123 08:46:02.372908  297115 start.go:927] validating driver "docker" against &{Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:02.373024  297115 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:46:02.373690  297115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:02.434152  297115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:46:02.423470428 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:02.434445  297115 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:46:02.434482  297115 cni.go:84] Creating CNI manager for ""
	I1123 08:46:02.434550  297115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:46:02.434584  297115 start.go:353] cluster config:
	{Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:02.437216  297115 out.go:179] * Starting "newest-cni-399335" primary control-plane node in "newest-cni-399335" cluster
	I1123 08:46:02.438363  297115 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:46:02.439542  297115 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:46:02.440662  297115 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:46:02.440696  297115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:46:02.440705  297115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 08:46:02.440721  297115 cache.go:65] Caching tarball of preloaded images
	I1123 08:46:02.440861  297115 preload.go:238] Found /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1123 08:46:02.440884  297115 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:46:02.440996  297115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/config.json ...
	I1123 08:46:02.462167  297115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:46:02.462192  297115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:46:02.462213  297115 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:46:02.462248  297115 start.go:360] acquireMachinesLock for newest-cni-399335: {Name:mka68fc1b11056460ac5dd4946687e6696340967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:46:02.462317  297115 start.go:364] duration metric: took 44.173µs to acquireMachinesLock for "newest-cni-399335"
	I1123 08:46:02.462339  297115 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:46:02.462349  297115 fix.go:54] fixHost starting: 
	I1123 08:46:02.462592  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:02.480611  297115 fix.go:112] recreateIfNeeded on newest-cni-399335: state=Stopped err=<nil>
	W1123 08:46:02.480640  297115 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:46:02.037790  293483 out.go:252]   - Generating certificates and keys ...
	I1123 08:46:02.037896  293483 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:46:02.037981  293483 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:46:02.456059  293483 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:46:02.650760  293483 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:46:02.892889  293483 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:46:03.433697  293483 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:46:03.596148  293483 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:46:03.596284  293483 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-794429 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:46:03.904760  293483 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:46:03.904904  293483 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-794429 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:46:04.138573  293483 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:46:04.371416  293483 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:46:04.533631  293483 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:46:04.533727  293483 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:46:05.059932  293483 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:46:05.296891  293483 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:46:05.532157  293483 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:46:05.911922  293483 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:46:06.189126  293483 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:46:06.190020  293483 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:46:06.206499  293483 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:46:06.209148  293483 out.go:252]   - Booting up control plane ...
	I1123 08:46:06.209257  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:46:06.209349  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:46:06.209433  293483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:46:06.223747  293483 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:46:06.223880  293483 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:46:06.230267  293483 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:46:06.230625  293483 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:46:06.230707  293483 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:46:06.333353  293483 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:46:06.333489  293483 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
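Note: the [kubelet-check] step above polls the kubelet's local health endpoint until it answers, retrying for up to the 4m0s it announces. A minimal reproduction of that probe from inside the node (illustrative only, not part of the captured log):

    # exits 0 once the kubelet is serving /healthz on its default health port
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet-healthy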
	I1123 08:46:02.482405  297115 out.go:252] * Restarting existing docker container for "newest-cni-399335" ...
	I1123 08:46:02.482477  297115 cli_runner.go:164] Run: docker start newest-cni-399335
	I1123 08:46:02.785631  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:02.807142  297115 kic.go:430] container "newest-cni-399335" state is running.
	I1123 08:46:02.807612  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:02.827013  297115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/config.json ...
	I1123 08:46:02.827313  297115 machine.go:94] provisionDockerMachine start ...
	I1123 08:46:02.827393  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:02.848474  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:02.848851  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:02.848869  297115 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:46:02.849609  297115 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48164->127.0.0.1:33098: read: connection reset by peer
	I1123 08:46:05.993595  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-399335
	
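Note: the "connection reset by peer" a few lines up is expected right after `docker start`: sshd inside the restarted container is not up yet, and libmachine keeps retrying until the handshake succeeds about three seconds later. A rough shell equivalent of that retry loop (a sketch, reusing the mapped port 33098 and the docker user from this log; key setup omitted):

    until ssh -o ConnectTimeout=1 -p 33098 docker@127.0.0.1 true 2>/dev/null; do
        sleep 1    # keep retrying while sshd inside the container boots
    done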
	I1123 08:46:05.993630  297115 ubuntu.go:182] provisioning hostname "newest-cni-399335"
	I1123 08:46:05.993706  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.012745  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:06.012960  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:06.012974  297115 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-399335 && echo "newest-cni-399335" | sudo tee /etc/hostname
	I1123 08:46:06.167781  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-399335
	
	I1123 08:46:06.167881  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.188339  297115 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:06.188686  297115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 08:46:06.188719  297115 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-399335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-399335/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-399335' | sudo tee -a /etc/hosts; 
				fi
			fi
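Note: the script above makes the freshly set hostname resolve locally: if no /etc/hosts line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends a new one. A quick spot-check that it took effect (illustrative):

    grep '^127.0.1.1' /etc/hosts    # expect: 127.0.1.1 newest-cni-399335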
	I1123 08:46:06.342749  297115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:46:06.342777  297115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-13876/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-13876/.minikube}
	I1123 08:46:06.342822  297115 ubuntu.go:190] setting up certificates
	I1123 08:46:06.342839  297115 provision.go:84] configureAuth start
	I1123 08:46:06.342903  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:06.364340  297115 provision.go:143] copyHostCerts
	I1123 08:46:06.364416  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem, removing ...
	I1123 08:46:06.364431  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem
	I1123 08:46:06.364526  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/ca.pem (1078 bytes)
	I1123 08:46:06.364669  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem, removing ...
	I1123 08:46:06.364683  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem
	I1123 08:46:06.364724  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/cert.pem (1123 bytes)
	I1123 08:46:06.364792  297115 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem, removing ...
	I1123 08:46:06.364799  297115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem
	I1123 08:46:06.364823  297115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-13876/.minikube/key.pem (1675 bytes)
	I1123 08:46:06.364877  297115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem org=jenkins.newest-cni-399335 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-399335]
	I1123 08:46:06.479812  297115 provision.go:177] copyRemoteCerts
	I1123 08:46:06.479870  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:46:06.479911  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.500499  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.603344  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:46:06.621631  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:46:06.640892  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:46:06.659451  297115 provision.go:87] duration metric: took 316.596054ms to configureAuth
	I1123 08:46:06.659481  297115 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:46:06.659806  297115 config.go:182] Loaded profile config "newest-cni-399335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:06.659823  297115 machine.go:97] duration metric: took 3.832490175s to provisionDockerMachine
	I1123 08:46:06.659835  297115 start.go:293] postStartSetup for "newest-cni-399335" (driver="docker")
	I1123 08:46:06.659849  297115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:46:06.659904  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:46:06.659946  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.678221  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.780370  297115 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:46:06.783936  297115 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:46:06.783965  297115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:46:06.783976  297115 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/addons for local assets ...
	I1123 08:46:06.784034  297115 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-13876/.minikube/files for local assets ...
	I1123 08:46:06.784128  297115 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem -> 174422.pem in /etc/ssl/certs
	I1123 08:46:06.784237  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:46:06.791552  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem --> /etc/ssl/certs/174422.pem (1708 bytes)
	I1123 08:46:06.809068  297115 start.go:296] duration metric: took 149.216822ms for postStartSetup
	I1123 08:46:06.809157  297115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:46:06.809195  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.829536  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.933880  297115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:46:06.938359  297115 fix.go:56] duration metric: took 4.476004793s for fixHost
	I1123 08:46:06.938381  297115 start.go:83] releasing machines lock for "newest-cni-399335", held for 4.476053793s
	I1123 08:46:06.938445  297115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-399335
	I1123 08:46:06.957272  297115 ssh_runner.go:195] Run: cat /version.json
	I1123 08:46:06.957329  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.957376  297115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:46:06.957477  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:06.979733  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:06.981876  297115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/newest-cni-399335/id_rsa Username:docker}
	I1123 08:46:07.156878  297115 ssh_runner.go:195] Run: systemctl --version
	I1123 08:46:07.164235  297115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:46:07.169524  297115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:46:07.169588  297115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:46:07.180131  297115 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:46:07.180160  297115 start.go:496] detecting cgroup driver to use...
	I1123 08:46:07.180197  297115 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:46:07.180249  297115 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:46:07.202860  297115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:46:07.219930  297115 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:46:07.219994  297115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:46:07.238447  297115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:46:07.254293  297115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:46:07.365439  297115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:46:07.535083  297115 docker.go:234] disabling docker service ...
	I1123 08:46:07.535146  297115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:46:07.559983  297115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:46:07.579841  297115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:46:07.725342  297115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:46:07.874595  297115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:46:07.893359  297115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:46:07.909897  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:46:07.920311  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:46:07.929226  297115 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1123 08:46:07.929301  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1123 08:46:07.938295  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:46:07.947245  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:46:07.956838  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:46:07.968734  297115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:46:07.979030  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:46:07.991079  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:46:08.003858  297115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:46:08.015755  297115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:46:08.025531  297115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:46:08.038175  297115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:08.166785  297115 ssh_runner.go:195] Run: sudo systemctl restart containerd
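Note: the run of sed commands above rewrites /etc/containerd/config.toml in place (systemd cgroup driver, pause image, runc v2 runtime, CNI conf dir, unprivileged ports) before the daemon-reload and restart make the changes effective. A spot-check of the two most important results (illustrative, not in the log):

    grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
    # expect: SystemdCgroup = true
    #         sandbox_image = "registry.k8s.io/pause:3.10.1"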
	I1123 08:46:08.324792  297115 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:46:08.324876  297115 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:46:08.331799  297115 start.go:564] Will wait 60s for crictl version
	I1123 08:46:08.331870  297115 ssh_runner.go:195] Run: which crictl
	I1123 08:46:08.336854  297115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:46:08.373035  297115 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:46:08.373100  297115 ssh_runner.go:195] Run: containerd --version
	I1123 08:46:08.401111  297115 ssh_runner.go:195] Run: containerd --version
	I1123 08:46:08.430798  297115 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:46:08.431908  297115 cli_runner.go:164] Run: docker network inspect newest-cni-399335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:46:08.457541  297115 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 08:46:08.464173  297115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
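Note: the /bin/bash -c one-liner above is minikube's idiom for replacing a single /etc/hosts entry: filter out any line tagged host.minikube.internal, append the fresh mapping, then copy the temp file back over /etc/hosts as root. Unfolded for readability (same command, a sketch; printf '\t' stands in for the literal tab):

    {
        grep -v $'\thost.minikube.internal$' /etc/hosts
        printf '192.168.103.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts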
	I1123 08:46:08.482189  297115 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 08:46:08.483606  297115 kubeadm.go:884] updating cluster {Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:46:08.483802  297115 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:46:08.483881  297115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:46:08.526415  297115 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:46:08.526443  297115 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:46:08.526514  297115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:46:08.563009  297115 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:46:08.563033  297115 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:46:08.563042  297115 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1123 08:46:08.563169  297115 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-399335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:46:08.563225  297115 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:46:08.602145  297115 cni.go:84] Creating CNI manager for ""
	I1123 08:46:08.602169  297115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:46:08.602186  297115 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 08:46:08.602215  297115 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-399335 NodeName:newest-cni-399335 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:46:08.602376  297115 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-399335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
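Note: the four-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2230 bytes). One way to sanity-check such a multi-document file by hand, assuming the kubeadm v1.34.1 binary this run installs (recent kubeadm releases ship a `config validate` subcommand; this check is not part of the log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new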
	I1123 08:46:08.602455  297115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:46:08.612890  297115 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:46:08.612967  297115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:46:08.627813  297115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1123 08:46:08.647557  297115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:46:08.665131  297115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1123 08:46:08.685078  297115 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:46:08.689721  297115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:46:08.703623  297115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:08.833755  297115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:46:08.860209  297115 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335 for IP: 192.168.103.2
	I1123 08:46:08.860232  297115 certs.go:195] generating shared ca certs ...
	I1123 08:46:08.860280  297115 certs.go:227] acquiring lock for ca certs: {Name:mk376e2c25eb30d8b09b93cb4624441e819bcc8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:08.860530  297115 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.key
	I1123 08:46:08.860612  297115 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.key
	I1123 08:46:08.860628  297115 certs.go:257] generating profile certs ...
	I1123 08:46:08.860770  297115 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/client.key
	I1123 08:46:08.860850  297115 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/apiserver.key.87937944
	I1123 08:46:08.860905  297115 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/proxy-client.key
	I1123 08:46:08.861044  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442.pem (1338 bytes)
	W1123 08:46:08.861086  297115 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442_empty.pem, impossibly tiny 0 bytes
	I1123 08:46:08.861100  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:46:08.861136  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:46:08.861175  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:46:08.861210  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/certs/key.pem (1675 bytes)
	I1123 08:46:08.861268  297115 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem (1708 bytes)
	I1123 08:46:08.862249  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:46:08.890883  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:46:08.919744  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:46:08.946210  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:46:08.982294  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:46:09.019550  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1123 08:46:09.059602  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:46:09.086103  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/newest-cni-399335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:46:09.114201  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/certs/17442.pem --> /usr/share/ca-certificates/17442.pem (1338 bytes)
	I1123 08:46:09.144268  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/ssl/certs/174422.pem --> /usr/share/ca-certificates/174422.pem (1708 bytes)
	I1123 08:46:09.180572  297115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:46:09.201581  297115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:46:09.215821  297115 ssh_runner.go:195] Run: openssl version
	I1123 08:46:09.223018  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17442.pem && ln -fs /usr/share/ca-certificates/17442.pem /etc/ssl/certs/17442.pem"
	I1123 08:46:09.232209  297115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17442.pem
	I1123 08:46:09.236284  297115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:16 /usr/share/ca-certificates/17442.pem
	I1123 08:46:09.236355  297115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17442.pem
	I1123 08:46:09.272176  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17442.pem /etc/ssl/certs/51391683.0"
	I1123 08:46:09.280935  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/174422.pem && ln -fs /usr/share/ca-certificates/174422.pem /etc/ssl/certs/174422.pem"
	I1123 08:46:09.290437  297115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/174422.pem
	I1123 08:46:09.294928  297115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:16 /usr/share/ca-certificates/174422.pem
	I1123 08:46:09.294987  297115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/174422.pem
	I1123 08:46:09.354146  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/174422.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:46:09.367905  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:46:09.382225  297115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:09.388164  297115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:09.388250  297115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:09.430194  297115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
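Note: the openssl/ln pairs above implement the standard OpenSSL CA directory layout: each CA certificate is symlinked under its subject hash plus ".0" (here b5213941.0 for minikubeCA.pem) so verification code can locate it by hash. The derivation the log performs, condensed (a sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"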
	I1123 08:46:09.442291  297115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:46:09.449422  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:46:09.526763  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:46:09.593988  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:46:09.703010  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:46:09.789583  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:46:09.853622  297115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
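Note: each `-checkend 86400` call above asks whether the certificate will expire within the next 86400 seconds (24 hours); openssl exits non-zero if so, which is what would trigger minikube to regenerate that cert. Illustrative standalone usage:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
        && echo "valid for at least 24h" || echo "expires within 24h"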
	I1123 08:46:09.922029  297115 kubeadm.go:401] StartCluster: {Name:newest-cni-399335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-399335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:09.922157  297115 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:46:09.922350  297115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:46:09.996366  297115 cri.go:89] found id: "9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1"
	I1123 08:46:09.996395  297115 cri.go:89] found id: "a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93"
	I1123 08:46:09.996401  297115 cri.go:89] found id: "5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865"
	I1123 08:46:09.996405  297115 cri.go:89] found id: "4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d"
	I1123 08:46:09.996408  297115 cri.go:89] found id: "807b76092a2f3826eb0b1f4ffd905f1558564151bbffb289a091369213ac3d66"
	I1123 08:46:09.996413  297115 cri.go:89] found id: "dd4ff42a202e4cece7872b48e65bf636b9f42a17ea01250502b439814c1772f1"
	I1123 08:46:09.996417  297115 cri.go:89] found id: "b8af8e149bfd1a9f0874f56d7c7812838cab58bce566ae2598bf5e99fb470db7"
	I1123 08:46:09.996421  297115 cri.go:89] found id: "b6d21ff2e246be4d70b8875b3b234adeb3b995e2334aab2dfee053c19daa6839"
	I1123 08:46:09.996425  297115 cri.go:89] found id: "bd5d93c8e80e3ae592e10a66d3b65225e8e2900e70d2c4efc9b0e215a576cd66"
	I1123 08:46:09.996434  297115 cri.go:89] found id: "9c2fa9f9f2c324430e4f3e6743e98eeea5c0938f06bb77f15b26511fabdc4fa0"
	I1123 08:46:09.996443  297115 cri.go:89] found id: ""
	I1123 08:46:09.996496  297115 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 08:46:10.044196  297115 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2/rootfs","created":"2025-11-23T08:46:09.614999348Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-399335_e75b5303f5682b75c76eb79dcc14c2e7","io.kubernetes.cri.sand
box-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e75b5303f5682b75c76eb79dcc14c2e7"},"owner":"root"},{"ociVersion":"1.2.1","id":"4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d","pid":914,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d/rootfs","created":"2025-11-23T08:46:09.790802621Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kub
e-system","io.kubernetes.cri.sandbox-uid":"64ff81d56135c1526673ad753b396633"},"owner":"root"},{"ociVersion":"1.2.1","id":"5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865","pid":959,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865/rootfs","created":"2025-11-23T08:46:09.843865397Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"265fd1decd3ec114f8f520dd098e0a26"},"owner":"root"},{"ociVersio
n":"1.2.1","id":"8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc","pid":849,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc/rootfs","created":"2025-11-23T08:46:09.62777907Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-399335_e7df3d71c3239606fee540d5b72221e3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-3993
35","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e7df3d71c3239606fee540d5b72221e3"},"owner":"root"},{"ociVersion":"1.2.1","id":"9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1","pid":973,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1/rootfs","created":"2025-11-23T08:46:09.852274123Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e7df3d71c3239606
fee540d5b72221e3"},"owner":"root"},{"ociVersion":"1.2.1","id":"a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93","pid":974,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93/rootfs","created":"2025-11-23T08:46:09.859475259Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e75b5303f5682b75c76eb79dcc14c2e7"},"owner":"root"},{"ociVersion":"1.2.1","id":"b4f41edbd630830
8032f8e835b34e1082e5f179e8e453f10bc315c82d458a740","pid":864,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740/rootfs","created":"2025-11-23T08:46:09.627673902Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-399335_265fd1decd3ec114f8f520dd098e0a26","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-399335","io.kubernetes.cri.sandbox-
namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"265fd1decd3ec114f8f520dd098e0a26"},"owner":"root"},{"ociVersion":"1.2.1","id":"e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da","pid":801,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da/rootfs","created":"2025-11-23T08:46:09.578352959Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-399335_64ff81d56135c1526673ad753b
396633","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-399335","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"64ff81d56135c1526673ad753b396633"},"owner":"root"}]
	I1123 08:46:10.044520  297115 cri.go:126] list returned 8 containers
	I1123 08:46:10.044625  297115 cri.go:129] container: {ID:06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2 Status:running}
	I1123 08:46:10.044665  297115 cri.go:131] skipping 06d5459b22404691663cda906abec3b4d87a28714bef7d08b59632e2c42ac5d2 - not in ps
	I1123 08:46:10.044672  297115 cri.go:129] container: {ID:4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d Status:running}
	I1123 08:46:10.044681  297115 cri.go:135] skipping {4967495c75cb11716d274c3d149904d55057b5e34909f5df641ba046cc9d8c2d running}: state = "running", want "paused"
	I1123 08:46:10.044691  297115 cri.go:129] container: {ID:5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865 Status:running}
	I1123 08:46:10.044698  297115 cri.go:135] skipping {5e1b307abc766db40a702fcc79877daca7f25a2002af0227cdf38324e7d61865 running}: state = "running", want "paused"
	I1123 08:46:10.044704  297115 cri.go:129] container: {ID:8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc Status:running}
	I1123 08:46:10.044712  297115 cri.go:131] skipping 8028ac3e5f6457fd538e96a953038d45a2bc1c1c669eea561083507536fe24cc - not in ps
	I1123 08:46:10.044718  297115 cri.go:129] container: {ID:9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1 Status:running}
	I1123 08:46:10.044727  297115 cri.go:135] skipping {9919ebcde05f89d535f303aec52924dfae279c686b44f439e70626b754bd1dc1 running}: state = "running", want "paused"
	I1123 08:46:10.044734  297115 cri.go:129] container: {ID:a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93 Status:running}
	I1123 08:46:10.044742  297115 cri.go:135] skipping {a8e8d9452f805bd93b8852b535449842da46b76733a6d960c13c7e7fb9904a93 running}: state = "running", want "paused"
	I1123 08:46:10.044748  297115 cri.go:129] container: {ID:b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740 Status:running}
	I1123 08:46:10.044755  297115 cri.go:131] skipping b4f41edbd6308308032f8e835b34e1082e5f179e8e453f10bc315c82d458a740 - not in ps
	I1123 08:46:10.044760  297115 cri.go:129] container: {ID:e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da Status:running}
	I1123 08:46:10.044765  297115 cri.go:131] skipping e3e680e09796965ca10b46a848cc41c83e73f2f100a5abb48d6d4cd3858989da - not in ps
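Note: the block above is minikube cross-checking two views of the runtime: container IDs reported by `crictl ps -a --quiet` (the cri.go:89 lines) against the low-level `runc list` JSON. Pause sandboxes appear only in runc's list, hence "not in ps", and every remaining container is skipped because the filter wants paused containers while all are running. The same id/status pairs can be pulled from that JSON by hand (illustrative; assumes jq is installed on the node):

    sudo runc --root /run/containerd/runc/k8s.io list -f json \
        | jq -r '.[] | "\(.id) \(.status)"'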
	I1123 08:46:10.044825  297115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:46:10.066819  297115 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:46:10.066887  297115 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:46:10.067303  297115 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:46:10.081936  297115 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:46:10.083561  297115 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-399335" does not appear in /home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:46:10.084688  297115 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-13876/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-399335" cluster setting kubeconfig missing "newest-cni-399335" context setting]
	I1123 08:46:10.086208  297115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/kubeconfig: {Name:mk636046b7146fd65b5638a6d549b76e61f7f055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:10.088485  297115 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:46:10.097957  297115 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1123 08:46:10.098060  297115 kubeadm.go:602] duration metric: took 31.165606ms to restartPrimaryControlPlane
	I1123 08:46:10.098071  297115 kubeadm.go:403] duration metric: took 176.052287ms to StartCluster
	I1123 08:46:10.098089  297115 settings.go:142] acquiring lock: {Name:mk2c00a8b461754a49d5c7fd5af34c7d1005153a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:10.098161  297115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:46:10.100850  297115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/kubeconfig: {Name:mk636046b7146fd65b5638a6d549b76e61f7f055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:10.101198  297115 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:46:10.101394  297115 config.go:182] Loaded profile config "newest-cni-399335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:10.101452  297115 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:46:10.101529  297115 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-399335"
	I1123 08:46:10.101545  297115 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-399335"
	W1123 08:46:10.101551  297115 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:46:10.101577  297115 host.go:66] Checking if "newest-cni-399335" exists ...
	I1123 08:46:10.102036  297115 addons.go:70] Setting default-storageclass=true in profile "newest-cni-399335"
	I1123 08:46:10.102059  297115 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-399335"
	I1123 08:46:10.102076  297115 addons.go:70] Setting dashboard=true in profile "newest-cni-399335"
	I1123 08:46:10.102101  297115 addons.go:239] Setting addon dashboard=true in "newest-cni-399335"
	W1123 08:46:10.102110  297115 addons.go:248] addon dashboard should already be in state true
	I1123 08:46:10.102151  297115 host.go:66] Checking if "newest-cni-399335" exists ...
	I1123 08:46:10.102354  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:10.102614  297115 addons.go:70] Setting metrics-server=true in profile "newest-cni-399335"
	I1123 08:46:10.102637  297115 addons.go:239] Setting addon metrics-server=true in "newest-cni-399335"
	W1123 08:46:10.102909  297115 addons.go:248] addon metrics-server should already be in state true
	I1123 08:46:10.102962  297115 host.go:66] Checking if "newest-cni-399335" exists ...
	I1123 08:46:10.102991  297115 out.go:179] * Verifying Kubernetes components...
	I1123 08:46:10.103252  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:10.103434  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:10.104177  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:10.104379  297115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:10.140855  297115 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:46:10.143699  297115 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:46:10.147683  297115 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:46:10.147712  297115 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:46:10.147784  297115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-399335
	I1123 08:46:10.155596  297115 addons.go:239] Setting addon default-storageclass=true in "newest-cni-399335"
	W1123 08:46:10.155626  297115 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:46:10.155669  297115 host.go:66] Checking if "newest-cni-399335" exists ...
	I1123 08:46:10.156196  297115 cli_runner.go:164] Run: docker container inspect newest-cni-399335 --format={{.State.Status}}
	I1123 08:46:10.163166  297115 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 08:46:10.164251  297115 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	b6283e3a15171       56cc512116c8f       10 seconds ago      Running             busybox                   0                   d9ef8a7ac5969       busybox                                                default
	4774054442e89       52546a367cc9e       17 seconds ago      Running             coredns                   0                   c41313d8dc21b       coredns-66bc5c9577-2gcbt                               kube-system
	09d20966a4f14       6e38f40d628db       17 seconds ago      Running             storage-provisioner       0                   6073dfc356881       storage-provisioner                                    kube-system
	6ba4b4be51644       409467f978b4a       28 seconds ago      Running             kindnet-cni               0                   1c4d1b2cb4ca0       kindnet-lxbpk                                          kube-system
	4852c9eb42fa6       fc25172553d79       28 seconds ago      Running             kube-proxy                0                   8867e565e311d       kube-proxy-7ctpr                                       kube-system
	fef96430b5d9d       7dd6aaa1717ab       41 seconds ago      Running             kube-scheduler            0                   f282ca97edc23       kube-scheduler-default-k8s-diff-port-525009            kube-system
	e18f3fb5d67d8       c80c8dbafe7dd       41 seconds ago      Running             kube-controller-manager   0                   044906c7cb6e1       kube-controller-manager-default-k8s-diff-port-525009   kube-system
	363d42cf4fe73       5f1f5298c888d       41 seconds ago      Running             etcd                      0                   f71ac8fe7d9e9       etcd-default-k8s-diff-port-525009                      kube-system
	93d371134d8cd       c3994bc696102       41 seconds ago      Running             kube-apiserver            0                   09d16f7253147       kube-apiserver-default-k8s-diff-port-525009            kube-system
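
The table above is CRI-level container state as reported by the runtime. A sketch of reproducing it, assuming the docker driver, where the "node" is a container reachable via minikube ssh and ships crictl:

	# -a includes exited containers; here all nine are Running
	minikube ssh -p default-k8s-diff-port-525009 -- sudo crictl ps -a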
	
	
	==> containerd <==
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.843693474Z" level=info msg="CreateContainer within sandbox \"6073dfc3568819bf1c3fe1eb5a2dae61a2135dc48eb78c0eff4859e9f8f63527\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"09d20966a4f14c03a208eb262fe86a49db6385ee3e7f03c598bf34355292fcbb\""
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.845364859Z" level=info msg="StartContainer for \"09d20966a4f14c03a208eb262fe86a49db6385ee3e7f03c598bf34355292fcbb\""
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.847103160Z" level=info msg="connecting to shim 09d20966a4f14c03a208eb262fe86a49db6385ee3e7f03c598bf34355292fcbb" address="unix:///run/containerd/s/e4a658ec7fe11cac1ecdf2a34d1ebcbbca1ca0d21884617dce4c8785d6c6df63" protocol=ttrpc version=3
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.849363129Z" level=info msg="Container 4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.859204790Z" level=info msg="CreateContainer within sandbox \"c41313d8dc21b68452aa1845287da26ff89fa2766c1bdf5ef3c7e04c5f3cf1d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de\""
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.860160546Z" level=info msg="StartContainer for \"4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de\""
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.861489121Z" level=info msg="connecting to shim 4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de" address="unix:///run/containerd/s/e94c14217f418e4b67411429633a1d9361554afbabba216e1588192901a0b54a" protocol=ttrpc version=3
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.916366596Z" level=info msg="StartContainer for \"09d20966a4f14c03a208eb262fe86a49db6385ee3e7f03c598bf34355292fcbb\" returns successfully"
	Nov 23 08:45:53 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:53.919096592Z" level=info msg="StartContainer for \"4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de\" returns successfully"
	Nov 23 08:45:57 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:57.464354593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:32796117-bb98-432a-add6-234fb1c63a55,Namespace:default,Attempt:0,}"
	Nov 23 08:45:57 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:57.505607890Z" level=info msg="connecting to shim d9ef8a7ac5969fc85790715a4653de637f334c7df54528f877a67e99d3a765b8" address="unix:///run/containerd/s/7b43b7aca51f60fa1a5caec7276f77a1b93916ecb8fc999956fa019237417300" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:45:57 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:57.588086443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:32796117-bb98-432a-add6-234fb1c63a55,Namespace:default,Attempt:0,} returns sandbox id \"d9ef8a7ac5969fc85790715a4653de637f334c7df54528f877a67e99d3a765b8\""
	Nov 23 08:45:57 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:45:57.590947480Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.257336191Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.259001304Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396642"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.260156530Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.262815365Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.264288915Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.673293046s"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.264349029Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.270443864Z" level=info msg="CreateContainer within sandbox \"d9ef8a7ac5969fc85790715a4653de637f334c7df54528f877a67e99d3a765b8\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.277865417Z" level=info msg="Container b6283e3a15171fd4661f91afeb1ae232a4a3cedd658ea18e9060f8845354c069: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.284839843Z" level=info msg="CreateContainer within sandbox \"d9ef8a7ac5969fc85790715a4653de637f334c7df54528f877a67e99d3a765b8\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"b6283e3a15171fd4661f91afeb1ae232a4a3cedd658ea18e9060f8845354c069\""
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.285480177Z" level=info msg="StartContainer for \"b6283e3a15171fd4661f91afeb1ae232a4a3cedd658ea18e9060f8845354c069\""
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.286810534Z" level=info msg="connecting to shim b6283e3a15171fd4661f91afeb1ae232a4a3cedd658ea18e9060f8845354c069" address="unix:///run/containerd/s/7b43b7aca51f60fa1a5caec7276f77a1b93916ecb8fc999956fa019237417300" protocol=ttrpc version=3
	Nov 23 08:46:00 default-k8s-diff-port-525009 containerd[662]: time="2025-11-23T08:46:00.345700843Z" level=info msg="StartContainer for \"b6283e3a15171fd4661f91afeb1ae232a4a3cedd658ea18e9060f8845354c069\" returns successfully"
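
These are containerd's journal entries covering the sandbox and container lifecycle of the busybox test pod. Assuming containerd runs as a systemd unit inside the node (the containerd[662] prefix suggests it does), the same stream can be pulled with:

	minikube ssh -p default-k8s-diff-port-525009 -- sudo journalctl -u containerd --no-pager -n 50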
	
	
	==> coredns [4774054442e8953d88e07a8355811517ade5c8a5a3312ed26093a06cc02812de] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40612 - 16684 "HINFO IN 5057302270381051508.4497259876176752764. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070309215s
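
CoreDNS came up cleanly (configuration SHA logged, one NXDOMAIN self-check). The equivalent fetch, using the pod name from the container-status table and assuming the kubectl context name matches the profile:

	kubectl --context default-k8s-diff-port-525009 -n kube-system logs coredns-66bc5c9577-2gcbt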
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-525009
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-525009
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=default-k8s-diff-port-525009
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_45_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:45:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-525009
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:46:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:46:06 +0000   Sun, 23 Nov 2025 08:45:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-525009
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d9e25572-4377-46e5-9d0d-6e4e67e6d372
	  Boot ID:                    3bab2277-1db4-4284-9fcc-5d1d58e87eb4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-66bc5c9577-2gcbt                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-default-k8s-diff-port-525009                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-lxbpk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-default-k8s-diff-port-525009             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-525009    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-7ctpr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-default-k8s-diff-port-525009             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  35s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35s   kubelet          Node default-k8s-diff-port-525009 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s   kubelet          Node default-k8s-diff-port-525009 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s   kubelet          Node default-k8s-diff-port-525009 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node default-k8s-diff-port-525009 event: Registered Node default-k8s-diff-port-525009 in Controller
	  Normal  NodeReady                18s   kubelet          Node default-k8s-diff-port-525009 status is now: NodeReady
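
The node description above is plain `kubectl describe node` output; nothing in it points at a cluster fault (node Ready 18s before capture, no taints, modest resource requests). Assuming the context name matches the profile:

	kubectl --context default-k8s-diff-port-525009 describe node default-k8s-diff-port-525009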
	
	
	==> dmesg <==
	[Nov23 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395963] i8042: Warning: Keylock active
	[  +0.012075] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497035] block sda: the capability attribute has been deprecated.
	[  +0.088048] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022581] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.308229] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [363d42cf4fe7388c30cc7709ae186282cd614789de7beed7e6e99c1741fac7d2] <==
	{"level":"warn","ts":"2025-11-23T08:45:31.347383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.355254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.362554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.372514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.394361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.404110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:31.414194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35572","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:45:33.141525Z","caller":"traceutil/trace.go:172","msg":"trace[357463721] linearizableReadLoop","detail":"{readStateIndex:71; appliedIndex:71; }","duration":"141.942706ms","start":"2025-11-23T08:45:32.999550Z","end":"2025-11-23T08:45:33.141493Z","steps":["trace[357463721] 'read index received'  (duration: 141.932956ms)","trace[357463721] 'applied index is now lower than readState.Index'  (duration: 8.252µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:33.141726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.171088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:45:33.141823Z","caller":"traceutil/trace.go:172","msg":"trace[1837473444] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:0; response_revision:67; }","duration":"142.29202ms","start":"2025-11-23T08:45:32.999519Z","end":"2025-11-23T08:45:33.141811Z","steps":["trace[1837473444] 'agreement among raft nodes before linearized reading'  (duration: 142.057341ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:33.141817Z","caller":"traceutil/trace.go:172","msg":"trace[2099524511] transaction","detail":"{read_only:false; response_revision:68; number_of_response:1; }","duration":"143.908697ms","start":"2025-11-23T08:45:32.997878Z","end":"2025-11-23T08:45:33.141787Z","steps":["trace[2099524511] 'process raft request'  (duration: 143.658219ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:33.270928Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.608528ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:45:33.270997Z","caller":"traceutil/trace.go:172","msg":"trace[700839492] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:0; response_revision:68; }","duration":"123.687523ms","start":"2025-11-23T08:45:33.147289Z","end":"2025-11-23T08:45:33.270977Z","steps":["trace[700839492] 'agreement among raft nodes before linearized reading'  (duration: 55.931217ms)","trace[700839492] 'range keys from in-memory index tree'  (duration: 67.642032ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:33.271009Z","caller":"traceutil/trace.go:172","msg":"trace[953933201] transaction","detail":"{read_only:false; response_revision:69; number_of_response:1; }","duration":"124.714767ms","start":"2025-11-23T08:45:33.146278Z","end":"2025-11-23T08:45:33.270993Z","steps":["trace[953933201] 'process raft request'  (duration: 56.952352ms)","trace[953933201] 'compare'  (duration: 67.661392ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:33.618934Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.510738ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361494636771 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:discovery\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:discovery\" value_size:587 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T08:45:33.619032Z","caller":"traceutil/trace.go:172","msg":"trace[1534971142] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"266.153516ms","start":"2025-11-23T08:45:33.352865Z","end":"2025-11-23T08:45:33.619019Z","steps":["trace[1534971142] 'process raft request'  (duration: 122.175575ms)","trace[1534971142] 'compare'  (duration: 143.407784ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:34.100805Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.653202ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361494636776 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:basic-user\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:basic-user\" value_size:617 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-23T08:45:34.100936Z","caller":"traceutil/trace.go:172","msg":"trace[1209360061] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"415.816418ms","start":"2025-11-23T08:45:33.685099Z","end":"2025-11-23T08:45:34.100916Z","steps":["trace[1209360061] 'process raft request'  (duration: 201.774693ms)","trace[1209360061] 'compare'  (duration: 213.497859ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:34.101010Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:45:33.685070Z","time spent":"415.899223ms","remote":"127.0.0.1:34790","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":665,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:basic-user\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:basic-user\" value_size:617 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T08:45:34.332421Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.04942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T08:45:34.332493Z","caller":"traceutil/trace.go:172","msg":"trace[1410638820] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:75; }","duration":"133.140694ms","start":"2025-11-23T08:45:34.199335Z","end":"2025-11-23T08:45:34.332476Z","steps":["trace[1410638820] 'agreement among raft nodes before linearized reading'  (duration: 50.929379ms)","trace[1410638820] 'range keys from in-memory index tree'  (duration: 82.069197ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:34.332496Z","caller":"traceutil/trace.go:172","msg":"trace[726635762] transaction","detail":"{read_only:false; response_revision:76; number_of_response:1; }","duration":"204.814639ms","start":"2025-11-23T08:45:34.127671Z","end":"2025-11-23T08:45:34.332485Z","steps":["trace[726635762] 'process raft request'  (duration: 122.614215ms)","trace[726635762] 'compare'  (duration: 82.073667ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:34.562302Z","caller":"traceutil/trace.go:172","msg":"trace[1605988146] transaction","detail":"{read_only:false; response_revision:79; number_of_response:1; }","duration":"214.497571ms","start":"2025-11-23T08:45:34.347783Z","end":"2025-11-23T08:45:34.562280Z","steps":["trace[1605988146] 'process raft request'  (duration: 129.22222ms)","trace[1605988146] 'compare'  (duration: 85.143646ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:56.960204Z","caller":"traceutil/trace.go:172","msg":"trace[967777746] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"129.675122ms","start":"2025-11-23T08:45:56.830466Z","end":"2025-11-23T08:45:56.960141Z","steps":["trace[967777746] 'process raft request'  (duration: 129.553077ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:57.149538Z","caller":"traceutil/trace.go:172","msg":"trace[642534120] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"125.582415ms","start":"2025-11-23T08:45:57.023931Z","end":"2025-11-23T08:45:57.149514Z","steps":["trace[642534120] 'process raft request'  (duration: 105.632084ms)","trace[642534120] 'compare'  (duration: 19.823896ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:46:11 up  1:28,  0 user,  load average: 4.85, 3.31, 2.15
	Linux default-k8s-diff-port-525009 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ba4b4be51644873d13828c75634af0b29f18bcd762b670bb61f7bb1d6243bdf] <==
	I1123 08:45:42.968223       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:45:43.062186       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 08:45:43.062339       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:45:43.062361       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:45:43.062392       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:45:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:45:43.237432       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:45:43.237492       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:45:43.237505       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:45:43.262328       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:45:43.662112       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:43.662171       1 metrics.go:72] Registering metrics
	I1123 08:45:43.662260       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:53.236910       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:53.236992       1 main.go:301] handling current node
	I1123 08:46:03.236834       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:46:03.236889       1 main.go:301] handling current node
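
kindnet synced its caches and is handling the single node; the "nri plugin exited" line appears benign here, since the controller keeps syncing afterwards. The same stream, via the pod name from the container-status table:

	kubectl --context default-k8s-diff-port-525009 -n kube-system logs kindnet-lxbpk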
	
	
	==> kube-apiserver [93d371134d8cd68f1d1abbdbbb0e1c610c76d9846c79f93d6a42eb5eae9a4a83] <==
	I1123 08:45:32.104999       1 policy_source.go:240] refreshing policies
	I1123 08:45:32.109023       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:45:32.114635       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:45:32.213770       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:45:32.213894       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:32.228795       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:45:32.229270       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:33.142985       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:45:33.271970       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:45:33.271991       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:45:35.091534       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:45:35.141306       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:45:35.207310       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:45:35.214898       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 08:45:35.216242       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:45:35.221164       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:45:36.013246       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:45:36.287020       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:45:36.296384       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:45:36.306970       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:45:41.820244       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:45:41.873222       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:45:41.970181       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:41.978189       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1123 08:46:07.095065       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:60582: use of closed network connection
	
	
	==> kube-controller-manager [e18f3fb5d67d896c3d577be63c9cb8343047e7da54b7279f99d061e515762679] <==
	I1123 08:45:41.011190       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:45:41.011077       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:45:41.011277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:45:41.011538       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:45:41.011561       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:45:41.011778       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:45:41.011872       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:45:41.011978       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:45:41.011986       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:45:41.012000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:45:41.012217       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:45:41.012240       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:45:41.014897       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:45:41.017212       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:41.017214       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:45:41.017290       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:45:41.017326       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:45:41.017338       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:45:41.017346       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:45:41.023833       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:45:41.023865       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-525009" podCIDRs=["10.244.0.0/24"]
	I1123 08:45:41.030830       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:45:41.038141       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:45:41.043152       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:56.013467       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4852c9eb42fa660c7c6109874aa516cf092f5350c45091e6c797fdd3465dd725] <==
	I1123 08:45:42.668629       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:45:42.734397       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:45:42.835399       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:45:42.835443       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 08:45:42.835582       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:45:42.860522       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:45:42.860583       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:45:42.867423       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:45:42.867852       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:45:42.867891       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:42.869436       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:45:42.869455       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:45:42.869971       1 config.go:200] "Starting service config controller"
	I1123 08:45:42.870049       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:45:42.870095       1 config.go:309] "Starting node config controller"
	I1123 08:45:42.870134       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:45:42.870165       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:45:42.870202       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:45:42.870210       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:45:42.969700       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:45:42.971096       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:45:42.971107       1 shared_informer.go:356] "Caches are synced" controller="service config"
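
kube-proxy's only complaint is the unset nodePortAddresses warning, and the log itself names the fix (`--nodeport-addresses primary`). In a kubeadm-managed cluster like this one, kube-proxy reads its configuration from the kube-system/kube-proxy ConfigMap; a hedged sketch, assuming this kube-proxy version's nodePortAddresses field accepts the "primary" keyword as the warning implies:

	kubectl --context default-k8s-diff-port-525009 -n kube-system edit configmap kube-proxy
	# then, under the config.conf key, set:
	#   nodePortAddresses: ["primary"]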
	
	
	==> kube-scheduler [fef96430b5d9df5cbc8184cb5f35d169dcc440b031c8acd15783ac4bcc6107b3] <==
	E1123 08:45:32.085666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:45:32.085720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:45:32.085797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:45:32.085835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:45:32.903353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:45:32.907922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:45:32.921521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:45:32.932021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:45:32.971664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:45:32.985154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:45:33.000310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:45:33.037777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:45:33.096115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:45:33.146393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:45:33.222994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:45:33.255459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:45:33.272913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:45:33.296236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:45:33.452255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:45:33.530758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:45:33.629240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:45:33.636711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:45:33.669272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:45:34.673132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1123 08:45:35.772218       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:45:37 default-k8s-diff-port-525009 kubelet[1418]: E1123 08:45:37.189972    1418 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-525009\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-525009"
	Nov 23 08:45:37 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:37.203635    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-525009" podStartSLOduration=1.203609293 podStartE2EDuration="1.203609293s" podCreationTimestamp="2025-11-23 08:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:37.203496749 +0000 UTC m=+1.153897540" watchObservedRunningTime="2025-11-23 08:45:37.203609293 +0000 UTC m=+1.154010065"
	Nov 23 08:45:37 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:37.225483    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-525009" podStartSLOduration=4.225451377 podStartE2EDuration="4.225451377s" podCreationTimestamp="2025-11-23 08:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:37.21463144 +0000 UTC m=+1.165032229" watchObservedRunningTime="2025-11-23 08:45:37.225451377 +0000 UTC m=+1.175852164"
	Nov 23 08:45:37 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:37.236802    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-525009" podStartSLOduration=1.23677989 podStartE2EDuration="1.23677989s" podCreationTimestamp="2025-11-23 08:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:37.225748665 +0000 UTC m=+1.176149450" watchObservedRunningTime="2025-11-23 08:45:37.23677989 +0000 UTC m=+1.187180672"
	Nov 23 08:45:37 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:37.237002    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-525009" podStartSLOduration=1.236992652 podStartE2EDuration="1.236992652s" podCreationTimestamp="2025-11-23 08:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:37.236319518 +0000 UTC m=+1.186720330" watchObservedRunningTime="2025-11-23 08:45:37.236992652 +0000 UTC m=+1.187393437"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.115860    1418 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.116516    1418 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880452    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b3eb58d0-c417-44b6-b3d4-13858fb320d6-cni-cfg\") pod \"kindnet-lxbpk\" (UID: \"b3eb58d0-c417-44b6-b3d4-13858fb320d6\") " pod="kube-system/kindnet-lxbpk"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880511    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fh8v\" (UniqueName: \"kubernetes.io/projected/b3eb58d0-c417-44b6-b3d4-13858fb320d6-kube-api-access-4fh8v\") pod \"kindnet-lxbpk\" (UID: \"b3eb58d0-c417-44b6-b3d4-13858fb320d6\") " pod="kube-system/kindnet-lxbpk"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880546    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6001077d-b9c4-4cc0-be56-daf8665fd2d8-kube-proxy\") pod \"kube-proxy-7ctpr\" (UID: \"6001077d-b9c4-4cc0-be56-daf8665fd2d8\") " pod="kube-system/kube-proxy-7ctpr"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880574    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6001077d-b9c4-4cc0-be56-daf8665fd2d8-xtables-lock\") pod \"kube-proxy-7ctpr\" (UID: \"6001077d-b9c4-4cc0-be56-daf8665fd2d8\") " pod="kube-system/kube-proxy-7ctpr"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880599    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfgbx\" (UniqueName: \"kubernetes.io/projected/6001077d-b9c4-4cc0-be56-daf8665fd2d8-kube-api-access-hfgbx\") pod \"kube-proxy-7ctpr\" (UID: \"6001077d-b9c4-4cc0-be56-daf8665fd2d8\") " pod="kube-system/kube-proxy-7ctpr"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880621    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3eb58d0-c417-44b6-b3d4-13858fb320d6-xtables-lock\") pod \"kindnet-lxbpk\" (UID: \"b3eb58d0-c417-44b6-b3d4-13858fb320d6\") " pod="kube-system/kindnet-lxbpk"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880667    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6001077d-b9c4-4cc0-be56-daf8665fd2d8-lib-modules\") pod \"kube-proxy-7ctpr\" (UID: \"6001077d-b9c4-4cc0-be56-daf8665fd2d8\") " pod="kube-system/kube-proxy-7ctpr"
	Nov 23 08:45:41 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:41.880695    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3eb58d0-c417-44b6-b3d4-13858fb320d6-lib-modules\") pod \"kindnet-lxbpk\" (UID: \"b3eb58d0-c417-44b6-b3d4-13858fb320d6\") " pod="kube-system/kindnet-lxbpk"
	Nov 23 08:45:43 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:43.205050    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lxbpk" podStartSLOduration=2.205026828 podStartE2EDuration="2.205026828s" podCreationTimestamp="2025-11-23 08:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:43.204937209 +0000 UTC m=+7.155337996" watchObservedRunningTime="2025-11-23 08:45:43.205026828 +0000 UTC m=+7.155427613"
	Nov 23 08:45:43 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:43.215798    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ctpr" podStartSLOduration=2.21577867 podStartE2EDuration="2.21577867s" podCreationTimestamp="2025-11-23 08:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:43.215736419 +0000 UTC m=+7.166137206" watchObservedRunningTime="2025-11-23 08:45:43.21577867 +0000 UTC m=+7.166179456"
	Nov 23 08:45:53 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:53.330407    1418 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:45:53 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:53.461724    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d7b0b879-ccf8-4b50-8333-06358ff1cb0e-tmp\") pod \"storage-provisioner\" (UID: \"d7b0b879-ccf8-4b50-8333-06358ff1cb0e\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:53 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:53.461781    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rhz\" (UniqueName: \"kubernetes.io/projected/d7b0b879-ccf8-4b50-8333-06358ff1cb0e-kube-api-access-w4rhz\") pod \"storage-provisioner\" (UID: \"d7b0b879-ccf8-4b50-8333-06358ff1cb0e\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:53 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:53.461813    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpkf\" (UniqueName: \"kubernetes.io/projected/366c0bf5-fc19-4019-a4d4-5fe5065c0e8e-kube-api-access-fvpkf\") pod \"coredns-66bc5c9577-2gcbt\" (UID: \"366c0bf5-fc19-4019-a4d4-5fe5065c0e8e\") " pod="kube-system/coredns-66bc5c9577-2gcbt"
	Nov 23 08:45:53 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:53.461841    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/366c0bf5-fc19-4019-a4d4-5fe5065c0e8e-config-volume\") pod \"coredns-66bc5c9577-2gcbt\" (UID: \"366c0bf5-fc19-4019-a4d4-5fe5065c0e8e\") " pod="kube-system/coredns-66bc5c9577-2gcbt"
	Nov 23 08:45:54 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:54.244051    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2gcbt" podStartSLOduration=12.244027173 podStartE2EDuration="12.244027173s" podCreationTimestamp="2025-11-23 08:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:54.243797176 +0000 UTC m=+18.194197961" watchObservedRunningTime="2025-11-23 08:45:54.244027173 +0000 UTC m=+18.194427960"
	Nov 23 08:45:54 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:54.275057    1418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.275033883 podStartE2EDuration="12.275033883s" podCreationTimestamp="2025-11-23 08:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:54.274782652 +0000 UTC m=+18.225183439" watchObservedRunningTime="2025-11-23 08:45:54.275033883 +0000 UTC m=+18.225434669"
	Nov 23 08:45:57 default-k8s-diff-port-525009 kubelet[1418]: I1123 08:45:57.186730    1418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwnr\" (UniqueName: \"kubernetes.io/projected/32796117-bb98-432a-add6-234fb1c63a55-kube-api-access-2mwnr\") pod \"busybox\" (UID: \"32796117-bb98-432a-add6-234fb1c63a55\") " pod="default/busybox"
	
	
	==> storage-provisioner [09d20966a4f14c03a208eb262fe86a49db6385ee3e7f03c598bf34355292fcbb] <==
	I1123 08:45:53.937772       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:53.946306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.956828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:53.957020       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:53.957165       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-525009_2f00e70c-c16e-4929-8048-2d208c7a7368!
	I1123 08:45:53.957233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"37a993b0-56fe-4e80-9014-ec7b94dcad63", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-525009_2f00e70c-c16e-4929-8048-2d208c7a7368 became leader
	W1123 08:45:53.961229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.971524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:54.060843       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-525009_2f00e70c-c16e-4929-8048-2d208c7a7368!
	W1123 08:45:55.974966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:56.040269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:58.045943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:58.050739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:00.053563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:00.059146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:02.062768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:02.068295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:04.071432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:04.075498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:06.079367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:06.084923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:08.089364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:08.096583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:10.100580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:46:10.147153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-525009 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.90s)
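A note on the storage-provisioner log above: the provisioner acquires its leader lease through an Endpoints-based resource lock (the kube-system/k8s.io-minikube-hostpath object named in the log), so every acquire/renew of the lock reads and writes a v1 Endpoints object and trips the "deprecated in v1.33+" warning at roughly the lock's retry cadence. Below is a minimal sketch of the Lease-based lock that client-go provides as the replacement; the names mirror the log, but this is illustrative only, not minikube's actual provisioner code.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A LeaseLock stores election state in a coordination.k8s.io/v1 Lease,
	// so renewals never touch the deprecated v1 Endpoints API.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // same lock name as in the log
			Namespace: "kube-system",
		},
		Client: client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: os.Getenv("HOSTNAME"),
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second, // comparable to the ~2s warning cadence above
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Print("became leader") },
			OnStoppedLeading: func() { log.Print("lost leadership") },
		},
	})
}

With a LeaseLock, the election state lives entirely in Leases, so the warning stream above disappears.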
Test pass (303/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 17.13
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 10.97
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.81
22 TestOffline 69.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 124.6
29 TestAddons/serial/Volcano 40.06
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 10.46
35 TestAddons/parallel/Registry 14.84
36 TestAddons/parallel/RegistryCreds 0.67
37 TestAddons/parallel/Ingress 21.47
38 TestAddons/parallel/InspektorGadget 10.71
39 TestAddons/parallel/MetricsServer 5.68
41 TestAddons/parallel/CSI 52.63
42 TestAddons/parallel/Headlamp 18.54
43 TestAddons/parallel/CloudSpanner 6.51
44 TestAddons/parallel/LocalPath 12.22
45 TestAddons/parallel/NvidiaDevicePlugin 5.6
46 TestAddons/parallel/Yakd 10.65
47 TestAddons/parallel/AmdGpuDevicePlugin 5.51
48 TestAddons/StoppedEnableDisable 12.62
49 TestCertOptions 30.29
50 TestCertExpiration 213.84
52 TestForceSystemdFlag 27.23
53 TestForceSystemdEnv 26.58
54 TestDockerEnvContainerd 35.35
58 TestErrorSpam/setup 21.51
59 TestErrorSpam/start 0.66
60 TestErrorSpam/status 0.97
61 TestErrorSpam/pause 1.44
62 TestErrorSpam/unpause 1.51
63 TestErrorSpam/stop 2.18
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.24
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.89
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.97
75 TestFunctional/serial/CacheCmd/cache/add_local 1.91
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 48.98
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.21
86 TestFunctional/serial/LogsFileCmd 1.25
87 TestFunctional/serial/InvalidService 4.02
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 7.69
91 TestFunctional/parallel/DryRun 0.39
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1.04
97 TestFunctional/parallel/ServiceCmdConnect 10.68
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 37.35
101 TestFunctional/parallel/SSHCmd 0.55
102 TestFunctional/parallel/CpCmd 1.87
103 TestFunctional/parallel/MySQL 21.85
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.88
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
113 TestFunctional/parallel/License 0.4
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
118 TestFunctional/parallel/Version/short 0.06
119 TestFunctional/parallel/Version/components 0.54
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 19.19
125 TestFunctional/parallel/ServiceCmd/List 0.53
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
128 TestFunctional/parallel/ServiceCmd/Format 0.39
129 TestFunctional/parallel/ServiceCmd/URL 0.46
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
131 TestFunctional/parallel/ProfileCmd/profile_list 0.49
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
133 TestFunctional/parallel/MountCmd/any-port 13
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
139 TestFunctional/parallel/ImageCommands/ImageBuild 4
140 TestFunctional/parallel/ImageCommands/Setup 1.77
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.1
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.12
148 TestFunctional/parallel/MountCmd/specific-port 1.76
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.16
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.77
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 131.63
163 TestMultiControlPlane/serial/DeployApp 5.9
164 TestMultiControlPlane/serial/PingHostFromPods 1.17
165 TestMultiControlPlane/serial/AddWorkerNode 23.99
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
168 TestMultiControlPlane/serial/CopyFile 17.48
169 TestMultiControlPlane/serial/StopSecondaryNode 12.76
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.73
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 92.58
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.47
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 36.18
177 TestMultiControlPlane/serial/RestartCluster 57.19
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
179 TestMultiControlPlane/serial/AddSecondaryNode 35.97
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
185 TestJSONOutput/start/Command 38.61
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.71
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.6
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.85
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 33.67
211 TestKicCustomNetwork/use_default_bridge_network 22.39
212 TestKicExistingNetwork 24.39
213 TestKicCustomSubnet 25.88
214 TestKicStaticIP 29.05
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 52.36
219 TestMountStart/serial/StartWithMountFirst 7.33
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 7.46
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.66
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.35
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 64.48
231 TestMultiNode/serial/DeployApp2Nodes 4.79
232 TestMultiNode/serial/PingHostFrom2Pods 0.79
233 TestMultiNode/serial/AddNode 23.28
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.67
236 TestMultiNode/serial/CopyFile 9.93
237 TestMultiNode/serial/StopNode 2.29
238 TestMultiNode/serial/StartAfterStop 6.87
239 TestMultiNode/serial/RestartKeepsNodes 76.05
240 TestMultiNode/serial/DeleteNode 5.3
241 TestMultiNode/serial/StopMultiNode 24.06
242 TestMultiNode/serial/RestartMultiNode 49.75
243 TestMultiNode/serial/ValidateNameConflict 22.79
248 TestPreload 108.87
250 TestScheduledStopUnix 99.31
253 TestInsufficientStorage 12.1
254 TestRunningBinaryUpgrade 92.58
256 TestKubernetesUpgrade 339.07
257 TestMissingContainerUpgrade 83.3
266 TestPause/serial/Start 83.49
268 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
269 TestNoKubernetes/serial/StartWithK8s 20.3
270 TestPause/serial/SecondStartNoReconfiguration 6.3
271 TestNoKubernetes/serial/StartWithStopK8s 22.32
272 TestPause/serial/Pause 0.82
273 TestPause/serial/VerifyStatus 0.36
274 TestPause/serial/Unpause 0.69
275 TestPause/serial/PauseAgain 0.68
276 TestPause/serial/DeletePaused 2.78
280 TestPause/serial/VerifyDeletedResources 15.59
285 TestNetworkPlugins/group/false 3.4
289 TestNoKubernetes/serial/Start 9.21
290 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
292 TestNoKubernetes/serial/ProfileList 19.87
293 TestNoKubernetes/serial/Stop 2.18
294 TestNoKubernetes/serial/StartNoArgs 7
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
296 TestStoppedBinaryUpgrade/Setup 2.59
297 TestStoppedBinaryUpgrade/Upgrade 42.49
298 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
300 TestStartStop/group/old-k8s-version/serial/FirstStart 51.61
302 TestStartStop/group/no-preload/serial/FirstStart 52.01
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
305 TestStartStop/group/old-k8s-version/serial/Stop 12.09
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/old-k8s-version/serial/SecondStart 44.25
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.85
310 TestStartStop/group/no-preload/serial/Stop 12.71
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
312 TestStartStop/group/no-preload/serial/SecondStart 49.53
314 TestStartStop/group/embed-certs/serial/FirstStart 45.84
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.35
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
320 TestStartStop/group/old-k8s-version/serial/Pause 2.99
322 TestStartStop/group/newest-cni/serial/FirstStart 31.11
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
326 TestStartStop/group/no-preload/serial/Pause 2.97
327 TestNetworkPlugins/group/auto/Start 70.77
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
332 TestStartStop/group/newest-cni/serial/Stop 1.43
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
334 TestStartStop/group/newest-cni/serial/SecondStart 13.32
335 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.37
337 TestStartStop/group/embed-certs/serial/Stop 13.16
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.76
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
342 TestStartStop/group/newest-cni/serial/Pause 2.81
343 TestNetworkPlugins/group/kindnet/Start 44.68
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.27
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
347 TestStartStop/group/embed-certs/serial/SecondStart 51.42
348 TestNetworkPlugins/group/auto/KubeletFlags 0.34
349 TestNetworkPlugins/group/auto/NetCatPod 9.19
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
352 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
353 TestNetworkPlugins/group/auto/DNS 0.13
354 TestNetworkPlugins/group/auto/Localhost 0.13
355 TestNetworkPlugins/group/auto/HairPin 0.12
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
359 TestNetworkPlugins/group/kindnet/DNS 0.15
360 TestNetworkPlugins/group/kindnet/Localhost 0.12
361 TestNetworkPlugins/group/kindnet/HairPin 0.12
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.97
364 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
365 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
366 TestStartStop/group/embed-certs/serial/Pause 3.58
367 TestNetworkPlugins/group/calico/Start 52.95
368 TestNetworkPlugins/group/custom-flannel/Start 51.74
369 TestNetworkPlugins/group/enable-default-cni/Start 40.05
370 TestNetworkPlugins/group/flannel/Start 59.31
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.31
373 TestNetworkPlugins/group/calico/ControllerPod 6.01
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
379 TestNetworkPlugins/group/calico/KubeletFlags 0.32
380 TestNetworkPlugins/group/calico/NetCatPod 8.21
381 TestNetworkPlugins/group/custom-flannel/DNS 0.14
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
384 TestNetworkPlugins/group/calico/DNS 0.14
385 TestNetworkPlugins/group/calico/Localhost 0.12
386 TestNetworkPlugins/group/calico/HairPin 0.13
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/bridge/Start 63.1
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.5
390 TestNetworkPlugins/group/flannel/NetCatPod 10.04
391 TestNetworkPlugins/group/flannel/DNS 0.16
392 TestNetworkPlugins/group/flannel/Localhost 0.13
393 TestNetworkPlugins/group/flannel/HairPin 0.11
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
395 TestNetworkPlugins/group/bridge/NetCatPod 9.17
396 TestNetworkPlugins/group/bridge/DNS 0.12
397 TestNetworkPlugins/group/bridge/Localhost 0.1
398 TestNetworkPlugins/group/bridge/HairPin 0.1
TestDownloadOnly/v1.28.0/json-events (17.13s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-069384 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-069384 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.132969655s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (17.13s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 08:10:56.904952   17442 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1123 08:10:56.905186   17442 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-069384
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-069384: exit status 85 (76.183267ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-069384 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-069384 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:10:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:10:39.824307   17454 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:10:39.824553   17454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:39.824564   17454 out.go:374] Setting ErrFile to fd 2...
	I1123 08:10:39.824570   17454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:39.824809   17454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	W1123 08:10:39.824938   17454 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21969-13876/.minikube/config/config.json: open /home/jenkins/minikube-integration/21969-13876/.minikube/config/config.json: no such file or directory
	I1123 08:10:39.825420   17454 out.go:368] Setting JSON to true
	I1123 08:10:39.826294   17454 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3181,"bootTime":1763882259,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:10:39.826352   17454 start.go:143] virtualization: kvm guest
	I1123 08:10:39.830926   17454 out.go:99] [download-only-069384] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1123 08:10:39.831078   17454 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 08:10:39.831107   17454 notify.go:221] Checking for updates...
	I1123 08:10:39.832370   17454 out.go:171] MINIKUBE_LOCATION=21969
	I1123 08:10:39.833928   17454 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:10:39.835184   17454 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:10:39.836631   17454 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:10:39.837842   17454 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 08:10:39.840004   17454 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 08:10:39.840251   17454 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:10:39.864638   17454 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:10:39.864732   17454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:10:40.248656   17454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 08:10:40.237696092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:10:40.248813   17454 docker.go:319] overlay module found
	I1123 08:10:40.250507   17454 out.go:99] Using the docker driver based on user configuration
	I1123 08:10:40.250544   17454 start.go:309] selected driver: docker
	I1123 08:10:40.250551   17454 start.go:927] validating driver "docker" against <nil>
	I1123 08:10:40.250662   17454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:10:40.313868   17454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 08:10:40.30451585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:10:40.314084   17454 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:10:40.314613   17454 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 08:10:40.314824   17454 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:10:40.316689   17454 out.go:171] Using Docker driver with root privileges
	I1123 08:10:40.318146   17454 cni.go:84] Creating CNI manager for ""
	I1123 08:10:40.318208   17454 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:10:40.318222   17454 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:10:40.318286   17454 start.go:353] cluster config:
	{Name:download-only-069384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-069384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:10:40.319535   17454 out.go:99] Starting "download-only-069384" primary control-plane node in "download-only-069384" cluster
	I1123 08:10:40.319563   17454 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:10:40.320624   17454 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:10:40.320679   17454 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:10:40.320754   17454 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:10:40.337538   17454 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:10:40.337780   17454 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:10:40.337888   17454 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:10:40.413453   17454 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 08:10:40.413496   17454 cache.go:65] Caching tarball of preloaded images
	I1123 08:10:40.413705   17454 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:10:40.415739   17454 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 08:10:40.415758   17454 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1123 08:10:40.675530   17454 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1123 08:10:40.675675   17454 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1123 08:10:51.200932   17454 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:10:51.201297   17454 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/download-only-069384/config.json ...
	I1123 08:10:51.201336   17454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/download-only-069384/config.json: {Name:mkf67fdab47b8f3e0af37f2b39b992ac4f0e883f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:10:51.201517   17454 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:10:51.201760   17454 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-069384 host does not exist
	  To start a cluster, run: "minikube start -p download-only-069384"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
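An aside on the preload download traced above: minikube asks the GCS API for the tarball's md5 ("Got checksum from GCS API ...") and then appends it to the download URL as ?checksum=md5:<hex>, which the downloader uses to verify the file after the transfer (the query form matches go-getter's checksum convention). A minimal, self-contained sketch of that verification step; fetchWithMD5 is a hypothetical helper, not minikube's downloader:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchWithMD5 streams url into dest while hashing the bytes, then rejects
// the file if the md5 digest does not match wantHex.
func fetchWithMD5(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// URL and digest taken verbatim from the log above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4"
	if err := fetchWithMD5(url, "preload.tar.lz4", "2746dfda401436a5341e0500068bf339"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}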
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-069384
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (10.97s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-456667 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-456667 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.97241901s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (10.97s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 08:11:08.323253   17442 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1123 08:11:08.323293   17442 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-456667
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-456667: exit status 85 (73.603814ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-069384 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-069384 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:10 UTC │
	│ delete  │ -p download-only-069384                                                                                                                                                               │ download-only-069384 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:10 UTC │
	│ start   │ -o=json --download-only -p download-only-456667 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-456667 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:10:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:10:57.400932   17860 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:10:57.401201   17860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:57.401211   17860 out.go:374] Setting ErrFile to fd 2...
	I1123 08:10:57.401219   17860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:57.401446   17860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:10:57.401909   17860 out.go:368] Setting JSON to true
	I1123 08:10:57.402732   17860 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3198,"bootTime":1763882259,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:10:57.402790   17860 start.go:143] virtualization: kvm guest
	I1123 08:10:57.404674   17860 out.go:99] [download-only-456667] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:10:57.404818   17860 notify.go:221] Checking for updates...
	I1123 08:10:57.406020   17860 out.go:171] MINIKUBE_LOCATION=21969
	I1123 08:10:57.407418   17860 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:10:57.408728   17860 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:10:57.409982   17860 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:10:57.411190   17860 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 08:10:57.413368   17860 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 08:10:57.413586   17860 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:10:57.437002   17860 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:10:57.437082   17860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:10:57.494196   17860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-23 08:10:57.485151031 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:10:57.494284   17860 docker.go:319] overlay module found
	I1123 08:10:57.495850   17860 out.go:99] Using the docker driver based on user configuration
	I1123 08:10:57.495880   17860 start.go:309] selected driver: docker
	I1123 08:10:57.495886   17860 start.go:927] validating driver "docker" against <nil>
	I1123 08:10:57.495957   17860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:10:57.551410   17860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-23 08:10:57.542680827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:10:57.551542   17860 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:10:57.552477   17860 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 08:10:57.552611   17860 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:10:57.554393   17860 out.go:171] Using Docker driver with root privileges
	I1123 08:10:57.555690   17860 cni.go:84] Creating CNI manager for ""
	I1123 08:10:57.555748   17860 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:10:57.555759   17860 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:10:57.555826   17860 start.go:353] cluster config:
	{Name:download-only-456667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-456667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:10:57.557162   17860 out.go:99] Starting "download-only-456667" primary control-plane node in "download-only-456667" cluster
	I1123 08:10:57.557177   17860 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:10:57.558314   17860 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:10:57.558349   17860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:10:57.558438   17860 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:10:57.574576   17860 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:10:57.574729   17860 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:10:57.574748   17860 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 08:10:57.574753   17860 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 08:10:57.574767   17860 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 08:10:57.651720   17860 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1123 08:10:57.651749   17860 cache.go:65] Caching tarball of preloaded images
	I1123 08:10:57.652028   17860 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:10:57.654149   17860 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1123 08:10:57.654172   17860 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1123 08:10:57.751002   17860 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1123 08:10:57.751043   17860 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21969-13876/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-456667 host does not exist
	  To start a cluster, run: "minikube start -p download-only-456667"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)
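
Note: the log above records the --download-only flow end to end: minikube finds the kic base image in the local cache, fetches the preload tarball's MD5 checksum from the GCS API, and downloads the preloaded-images tarball into the profile cache. A minimal sketch of warming the same caches by hand (the profile name warm-cache is illustrative, not from this run):

    # populate ~/.minikube/cache without creating a cluster
    out/minikube-linux-amd64 start --download-only -p warm-cache \
      --driver=docker --container-runtime=containerd
    # the preload tarball lands under the cache directory
    ls ~/.minikube/cache/preloaded-tarball/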

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-456667
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (0.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-471494 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-471494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-471494
--- PASS: TestDownloadOnlyKic (0.40s)

                                                
                                    
TestBinaryMirror (0.81s)

                                                
                                                
=== RUN   TestBinaryMirror
I1123 08:11:09.452618   17442 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-506462 --alsologtostderr --binary-mirror http://127.0.0.1:38591 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-506462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-506462
--- PASS: TestBinaryMirror (0.81s)
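
Note: --binary-mirror redirects the kubectl/kubeadm/kubelet downloads that would otherwise hit dl.k8s.io (see the binary.go line above) to a caller-supplied HTTP server. A sketch of the same usage, assuming a mirror is already serving on the port the test happened to use:

    out/minikube-linux-amd64 start --download-only -p mirror-demo \
      --binary-mirror http://127.0.0.1:38591 \
      --driver=docker --container-runtime=containerd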

                                                
                                    
TestOffline (69.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-721600 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-721600 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m6.823341412s)
helpers_test.go:175: Cleaning up "offline-containerd-721600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-721600
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-721600: (2.447435566s)
--- PASS: TestOffline (69.27s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-963149
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-963149: exit status 85 (61.27099ms)

                                                
                                                
-- stdout --
	* Profile "addons-963149" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-963149"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-963149
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-963149: exit status 85 (60.953632ms)

                                                
                                                
-- stdout --
	* Profile "addons-963149" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-963149"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (124.6s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-963149 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-963149 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m4.603811473s)
--- PASS: TestAddons/Setup (124.60s)
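
Note: this run enables every addon in a single start invocation; addons can just as well be toggled after the cluster is up. A minimal sketch (profile name addons-demo is illustrative):

    out/minikube-linux-amd64 start -p addons-demo --memory=4096 \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server
    out/minikube-linux-amd64 addons enable ingress -p addons-demo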

                                                
                                    
TestAddons/serial/Volcano (40.06s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 13.731873ms
addons_test.go:884: volcano-controller stabilized in 14.210418ms
addons_test.go:876: volcano-admission stabilized in 14.263736ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-x5dxm" [eb830151-e371-4088-84f5-6b487fe135e7] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003742831s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-2s9x6" [04d2f5c2-ff6e-4de8-ae20-ed757a0c3ad2] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003363267s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-85sxr" [ed1ea678-6dc6-48c6-b4e8-d6ddee266f42] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003292518s
addons_test.go:903: (dbg) Run:  kubectl --context addons-963149 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-963149 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-963149 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [30554d5e-e880-4710-86a0-0f284aac0ed2] Pending
helpers_test.go:352: "test-job-nginx-0" [30554d5e-e880-4710-86a0-0f284aac0ed2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [30554d5e-e880-4710-86a0-0f284aac0ed2] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003276107s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-963149 addons disable volcano --alsologtostderr -v=1: (11.692515513s)
--- PASS: TestAddons/serial/Volcano (40.06s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-963149 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-963149 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.46s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-963149 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-963149 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c504434f-3316-4daa-8952-c0a9d3cabbd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c504434f-3316-4daa-8952-c0a9d3cabbd6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003185737s
addons_test.go:694: (dbg) Run:  kubectl --context addons-963149 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-963149 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-963149 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.46s)

                                                
                                    
TestAddons/parallel/Registry (14.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 15.274336ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-6q755" [14c936cf-650c-437b-895c-270ae6a9ad33] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003690941s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-dp8h8" [0a3a5583-2606-4955-a992-c0de65475cfa] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00318318s
addons_test.go:392: (dbg) Run:  kubectl --context addons-963149 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-963149 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-963149 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.009788776s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 ip
2025/11/23 08:14:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.84s)
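
Note: the registry check above is reusable on any cluster with the registry addon: probe the in-cluster service DNS name from a throwaway busybox pod, then hit the node-level registry proxy on port 5000 from the host. A sketch against the addons-963149 profile from this run (the curl step is an assumption, mirroring the GET logged above):

    kubectl --context addons-963149 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-963149 ip):5000/"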

                                                
                                    
TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.003221ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-963149
addons_test.go:332: (dbg) Run:  kubectl --context addons-963149 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                    
TestAddons/parallel/Ingress (21.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-963149 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-963149 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-963149 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [e0907008-8d8c-464a-b233-233f43555c92] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [e0907008-8d8c-464a-b233-233f43555c92] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003418922s
I1123 08:14:49.258697   17442 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-963149 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-963149 addons disable ingress-dns --alsologtostderr -v=1: (1.558610879s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-963149 addons disable ingress --alsologtostderr -v=1: (7.681614754s)
--- PASS: TestAddons/parallel/Ingress (21.47s)
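
Note: the ingress verification pattern is two-step: route a request through the controller by name using a Host header, then confirm ingress-dns resolves test hostnames against the node IP. Condensed from the commands above:

    out/minikube-linux-amd64 -p addons-963149 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-963149 ip)"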

                                                
                                    
TestAddons/parallel/InspektorGadget (10.71s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-fnhgx" [8b24ce4d-4a8e-41b9-8f2e-25f583c0c3eb] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003197262s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-963149 addons disable inspektor-gadget --alsologtostderr -v=1: (5.705828203s)
--- PASS: TestAddons/parallel/InspektorGadget (10.71s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.101307ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4xpkv" [53c8705e-0f73-4a6c-ac36-06d591917f45] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003977381s
addons_test.go:463: (dbg) Run:  kubectl --context addons-963149 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

                                                
                                    
TestAddons/parallel/CSI (52.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 2.78331ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-963149 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-963149 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [dbbb6960-b50f-463e-bd6d-5164cb7351b4] Pending
helpers_test.go:352: "task-pv-pod" [dbbb6960-b50f-463e-bd6d-5164cb7351b4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [dbbb6960-b50f-463e-bd6d-5164cb7351b4] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003538655s
addons_test.go:572: (dbg) Run:  kubectl --context addons-963149 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-963149 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-963149 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-963149 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-963149 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-963149 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-963149 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [6d852d45-c712-4072-b483-4b4ebaece3ac] Pending
helpers_test.go:352: "task-pv-pod-restore" [6d852d45-c712-4072-b483-4b4ebaece3ac] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [6d852d45-c712-4072-b483-4b4ebaece3ac] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003028865s
addons_test.go:614: (dbg) Run:  kubectl --context addons-963149 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-963149 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-963149 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-963149 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.527756867s)
--- PASS: TestAddons/parallel/CSI (52.63s)
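
Note: the CSI subtest walks the full snapshot/restore lifecycle: bind a PVC against csi-hostpath-driver, snapshot it while a pod holds it, then restore into a fresh PVC (pvc-restore.yaml presumably references the snapshot as its dataSource) and mount that from a second pod. The kubectl skeleton, using the repo's testdata manifests:

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml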

                                                
                                    
TestAddons/parallel/Headlamp (18.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-963149 --alsologtostderr -v=1
I1123 08:14:14.485224   17442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-lp7cv" [aa0a0a96-fa06-418e-b0ff-f879a6294d18] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-lp7cv" [aa0a0a96-fa06-418e-b0ff-f879a6294d18] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-lp7cv" [aa0a0a96-fa06-418e-b0ff-f879a6294d18] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003328206s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-963149 addons disable headlamp --alsologtostderr -v=1: (5.77494647s)
--- PASS: TestAddons/parallel/Headlamp (18.54s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-98fwd" [f02a6d35-a43c-4b21-a966-0a1f8d9f3d05] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002726187s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.51s)

                                                
                                    
TestAddons/parallel/LocalPath (12.22s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-963149 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-963149 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-963149 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [381c79b5-6071-4f5f-80af-80dd045570c8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [381c79b5-6071-4f5f-80af-80dd045570c8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [381c79b5-6071-4f5f-80af-80dd045570c8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.011002048s
addons_test.go:967: (dbg) Run:  kubectl --context addons-963149 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 ssh "cat /opt/local-path-provisioner/pvc-5e81bb24-3ac8-4069-b795-f74ea8f3dc86_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-963149 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-963149 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.22s)
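
Note: local-path provisions volumes as plain host directories named pvc-<uid>_<namespace>_<claim>, which is why the test can read file1 back with a simple ssh "cat". Listing the provisioner root is a quick way to see what a run left behind (an added convenience, not a step from this test):

    out/minikube-linux-amd64 -p addons-963149 ssh "ls /opt/local-path-provisioner/"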

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-f62ln" [86899ef9-0770-4b30-98e4-c4b65c81eeef] Running
I1123 08:14:14.487948   17442 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 08:14:14.487975   17442 kapi.go:107] duration metric: took 2.771263ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003470134s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

                                                
                                    
TestAddons/parallel/Yakd (10.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-qdh75" [18b09ba8-356a-493e-8520-ba718d518267] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003201791s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-963149 addons disable yakd --alsologtostderr -v=1: (5.645430781s)
--- PASS: TestAddons/parallel/Yakd (10.65s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-tjgrt" [c121fba5-dd75-4a04-9570-233d66d3b78c] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.00611842s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-963149 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.51s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.62s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-963149
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-963149: (12.339890384s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-963149
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-963149
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-963149
--- PASS: TestAddons/StoppedEnableDisable (12.62s)
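
Note: the point of this test appears to be the asymmetry with the PreSetup cases above: enable/disable against a stopped-but-existing profile succeeds (the change is recorded and applied on the next start), whereas the same command against a non-existent profile exits 85. Sketch:

    out/minikube-linux-amd64 stop -p addons-963149
    out/minikube-linux-amd64 addons enable dashboard -p addons-963149   # accepted while stopped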

                                                
                                    
TestCertOptions (30.29s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-194967 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-194967 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (27.154878855s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-194967 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-194967 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-194967 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-194967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-194967
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-194967: (2.457679962s)
--- PASS: TestCertOptions (30.29s)
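
Note: the certificate assertions reduce to inspecting the generated apiserver cert for the extra SANs and the non-default port. The same check by hand, filtering for the SAN block (the grep is an added convenience):

    out/minikube-linux-amd64 -p cert-options-194967 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"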

                                                
                                    
TestCertExpiration (213.84s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-680868 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-680868 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (24.023666838s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-680868 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-680868 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.926578045s)
helpers_test.go:175: Cleaning up "cert-expiration-680868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-680868
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-680868: (3.885086917s)
--- PASS: TestCertExpiration (213.84s)
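
Note: --cert-expiration is honored both at creation (3m here, so the certs lapse while the test waits) and on the follow-up start (8760h, i.e. one year, forcing regeneration). A quick manual validity probe using openssl's -checkend, which exits non-zero if the cert expires within the given seconds (this step is an assumption, not part of the run):

    out/minikube-linux-amd64 -p cert-expiration-680868 ssh \
      "openssl x509 -noout -checkend 3600 -in /var/lib/minikube/certs/apiserver.crt"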

                                                
                                    
TestForceSystemdFlag (27.23s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-570956 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-570956 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.512272982s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-570956 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-570956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-570956
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-570956: (2.377199524s)
--- PASS: TestForceSystemdFlag (27.23s)
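
Note: --force-systemd switches containerd to the systemd cgroup driver, which the test verifies by dumping /etc/containerd/config.toml over ssh. Grepping for the key directly is a shorter check (assuming the stock kicbase config layout, where SystemdCgroup sits under the runc options table):

    out/minikube-linux-amd64 -p force-systemd-flag-570956 ssh \
      "grep SystemdCgroup /etc/containerd/config.toml"
    # expected: SystemdCgroup = true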

                                                
                                    
TestForceSystemdEnv (26.58s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-352249 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-352249 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.792787944s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-352249 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-352249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-352249
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-352249: (2.501399392s)
--- PASS: TestForceSystemdEnv (26.58s)

                                                
                                    
TestDockerEnvContainerd (35.35s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-631229 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-631229 --driver=docker  --container-runtime=containerd: (19.227570308s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-631229"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXjl61CC/agent.41000" SSH_AGENT_PID="41001" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXjl61CC/agent.41000" SSH_AGENT_PID="41001" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXjl61CC/agent.41000" SSH_AGENT_PID="41001" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.88270772s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXjl61CC/agent.41000" SSH_AGENT_PID="41001" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-631229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-631229
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-631229: (2.317855989s)
--- PASS: TestDockerEnvContainerd (35.35s)
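
Note: the docker-env flow shown here is the supported way to point a host docker CLI at the machine's daemon over ssh: eval the emitted environment (DOCKER_HOST=ssh://..., plus an ssh-agent when --ssh-add is given), then build and list images as usual. A condensed sketch; note the run above needed DOCKER_BUILDKIT=0 for the build:

    eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-631229)"
    DOCKER_BUILDKIT=0 docker build -t local/demo:latest testdata/docker-env
    docker image ls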

                                                
                                    
TestErrorSpam/setup (21.51s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-612283 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-612283 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-612283 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-612283 --driver=docker  --container-runtime=containerd: (21.506690043s)
--- PASS: TestErrorSpam/setup (21.51s)

                                                
                                    
TestErrorSpam/start (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
TestErrorSpam/status (0.97s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 status
--- PASS: TestErrorSpam/status (0.97s)

                                                
                                    
TestErrorSpam/pause (1.44s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 pause
--- PASS: TestErrorSpam/pause (1.44s)

                                                
                                    
TestErrorSpam/unpause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

                                                
                                    
TestErrorSpam/stop (2.18s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 stop: (1.974168311s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612283 --log_dir /tmp/nospam-612283 stop
--- PASS: TestErrorSpam/stop (2.18s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21969-13876/.minikube/files/etc/test/nested/copy/17442/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (38.24s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-614508 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-614508 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (38.238437219s)
--- PASS: TestFunctional/serial/StartWithProxy (38.24s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.89s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1123 08:17:11.754019   17442 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-614508 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-614508 --alsologtostderr -v=8: (5.890534135s)
functional_test.go:678: soft start took 5.891685904s for "functional-614508" cluster.
I1123 08:17:17.646318   17442 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.89s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-614508 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.97s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-614508 cache add registry.k8s.io/pause:3.3: (1.089136082s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.97s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-614508 /tmp/TestFunctionalserialCacheCmdcacheadd_local1952747448/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 cache add minikube-local-cache-test:functional-614508
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-614508 cache add minikube-local-cache-test:functional-614508: (1.558354571s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 cache delete minikube-local-cache-test:functional-614508
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-614508
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-614508 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.174474ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 kubectl -- --context functional-614508 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-614508 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (48.98s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-614508 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-614508 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (48.977826108s)
functional_test.go:776: restart took 48.977962515s for "functional-614508" cluster.
I1123 08:18:13.976480   17442 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (48.98s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-614508 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 logs
E1123 08:18:14.930246   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:14.936705   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:14.948281   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:14.969767   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:15.011915   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:15.093436   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:15.254750   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-614508 logs: (1.214575585s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

TestFunctional/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 logs --file /tmp/TestFunctionalserialLogsFileCmd3030683325/001/logs.txt
E1123 08:18:15.576740   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:16.218848   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-614508 logs --file /tmp/TestFunctionalserialLogsFileCmd3030683325/001/logs.txt: (1.245152282s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.02s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-614508 apply -f testdata/invalidsvc.yaml
E1123 08:18:17.500521   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-614508
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-614508: exit status 115 (346.975739ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31396 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-614508 delete -f testdata/invalidsvc.yaml
E1123 08:18:20.062539   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/InvalidService (4.02s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-614508 config get cpus: exit status 14 (92.005676ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-614508 config get cpus: exit status 14 (72.308975ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (7.69s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-614508 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-614508 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 62397: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.69s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-614508 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-614508 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (159.695633ms)

-- stdout --
	* [functional-614508] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1123 08:18:43.415799   61850 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:18:43.415917   61850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:18:43.415926   61850 out.go:374] Setting ErrFile to fd 2...
	I1123 08:18:43.415933   61850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:18:43.416133   61850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:18:43.416560   61850 out.go:368] Setting JSON to false
	I1123 08:18:43.417524   61850 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3664,"bootTime":1763882259,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:18:43.417582   61850 start.go:143] virtualization: kvm guest
	I1123 08:18:43.419715   61850 out.go:179] * [functional-614508] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:18:43.421416   61850 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:18:43.421469   61850 notify.go:221] Checking for updates...
	I1123 08:18:43.423841   61850 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:18:43.425085   61850 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:18:43.426277   61850 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:18:43.427318   61850 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:18:43.428548   61850 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:18:43.430139   61850 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:18:43.430696   61850 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:18:43.452668   61850 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:18:43.452749   61850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:18:43.509830   61850 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 08:18:43.500719164 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:18:43.509932   61850 docker.go:319] overlay module found
	I1123 08:18:43.511522   61850 out.go:179] * Using the docker driver based on existing profile
	I1123 08:18:43.512594   61850 start.go:309] selected driver: docker
	I1123 08:18:43.512616   61850 start.go:927] validating driver "docker" against &{Name:functional-614508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-614508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:18:43.512744   61850 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:18:43.514409   61850 out.go:203] 
	W1123 08:18:43.515684   61850 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 08:18:43.516995   61850 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-614508 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.39s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-614508 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-614508 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (159.114904ms)

-- stdout --
	* [functional-614508] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1123 08:18:43.804533   62073 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:18:43.804628   62073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:18:43.804636   62073 out.go:374] Setting ErrFile to fd 2...
	I1123 08:18:43.804640   62073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:18:43.804960   62073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:18:43.805386   62073 out.go:368] Setting JSON to false
	I1123 08:18:43.806445   62073 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3665,"bootTime":1763882259,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:18:43.806499   62073 start.go:143] virtualization: kvm guest
	I1123 08:18:43.808418   62073 out.go:179] * [functional-614508] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1123 08:18:43.810275   62073 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:18:43.810316   62073 notify.go:221] Checking for updates...
	I1123 08:18:43.812951   62073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:18:43.814196   62073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:18:43.815380   62073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:18:43.816626   62073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:18:43.817827   62073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:18:43.819544   62073 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:18:43.820054   62073 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:18:43.843972   62073 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:18:43.844077   62073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:18:43.898381   62073 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:61 SystemTime:2025-11-23 08:18:43.889431547 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:18:43.898497   62073 docker.go:319] overlay module found
	I1123 08:18:43.900185   62073 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 08:18:43.901428   62073 start.go:309] selected driver: docker
	I1123 08:18:43.901442   62073 start.go:927] validating driver "docker" against &{Name:functional-614508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-614508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:18:43.901517   62073 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:18:43.903255   62073 out.go:203] 
	W1123 08:18:43.904435   62073 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 08:18:43.905458   62073 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmdConnect (10.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-614508 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-614508 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-lxqjt" [abaaede9-d812-4ae5-82b8-8a56f2678f93] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-lxqjt" [abaaede9-d812-4ae5-82b8-8a56f2678f93] Running
E1123 08:18:55.908047   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003496472s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31936
functional_test.go:1680: http://192.168.49.2:31936: success! body:
Request served by hello-node-connect-7d85dfc575-lxqjt

HTTP/1.1 GET /

Host: 192.168.49.2:31936
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.68s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (37.35s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f247870f-a160-4cee-963d-683cf894fca1] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003444371s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-614508 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-614508 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-614508 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-614508 apply -f testdata/storage-provisioner/pod.yaml
I1123 08:18:28.194249   17442 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7664703b-2853-49ee-9bf6-81fe361ea496] Pending
helpers_test.go:352: "sp-pod" [7664703b-2853-49ee-9bf6-81fe361ea496] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7664703b-2853-49ee-9bf6-81fe361ea496] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003937803s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-614508 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-614508 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-614508 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7713f95a-754c-40a1-a5e4-53948171c71f] Pending
helpers_test.go:352: "sp-pod" [7713f95a-754c-40a1-a5e4-53948171c71f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7713f95a-754c-40a1-a5e4-53948171c71f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.00360776s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-614508 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.35s)

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.87s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh -n functional-614508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 cp functional-614508:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd592687576/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh -n functional-614508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh -n functional-614508 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.87s)

TestFunctional/parallel/MySQL (21.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-614508 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-vhr97" [72ec9d75-2118-4d9b-bcd4-01543826102c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-vhr97" [72ec9d75-2118-4d9b-bcd4-01543826102c] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003361263s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-614508 exec mysql-5bb876957f-vhr97 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-614508 exec mysql-5bb876957f-vhr97 -- mysql -ppassword -e "show databases;": exit status 1 (130.171481ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1123 08:18:39.621870   17442 retry.go:31] will retry after 792.471793ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-614508 exec mysql-5bb876957f-vhr97 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-614508 exec mysql-5bb876957f-vhr97 -- mysql -ppassword -e "show databases;": exit status 1 (106.261673ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1123 08:18:40.521279   17442 retry.go:31] will retry after 1.030423367s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-614508 exec mysql-5bb876957f-vhr97 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-614508 exec mysql-5bb876957f-vhr97 -- mysql -ppassword -e "show databases;": exit status 1 (109.013553ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1123 08:18:41.661866   17442 retry.go:31] will retry after 1.38084669s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-614508 exec mysql-5bb876957f-vhr97 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.85s)

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/17442/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo cat /etc/test/nested/copy/17442/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.88s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/17442.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo cat /etc/ssl/certs/17442.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/17442.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo cat /usr/share/ca-certificates/17442.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/174422.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo cat /etc/ssl/certs/174422.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/174422.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo cat /usr/share/ca-certificates/174422.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.88s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-614508 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-614508 ssh "sudo systemctl is-active docker": exit status 1 (287.195998ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-614508 ssh "sudo systemctl is-active crio": exit status 1 (288.027196ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

TestFunctional/parallel/License (0.4s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-614508 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-614508 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7ssvv" [d90f1fe1-b265-4803-b7b6-679fcaef9c2e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-7ssvv" [d90f1fe1-b265-4803-b7b6-679fcaef9c2e] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.00403894s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)

                                                
                                    
x
+
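The same deployment can be created by hand (a minimal sketch using the names from this run; the -w watch is the interactive stand-in for the test's pod wait):

  kubectl --context functional-614508 create deployment hello-node --image kicbase/echo-server
  kubectl --context functional-614508 expose deployment hello-node --type=NodePort --port=8080
  # watch until the pod reports Running
  kubectl --context functional-614508 get pods -l app=hello-node -w
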
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 update-context --alsologtostderr -v=2
2025/11/23 08:18:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

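All three subtests run the same command against different kubeconfig states; by hand (sketch):

  # rewrite the profile's kubeconfig entry to match the cluster's current IP and port
  out/minikube-linux-amd64 -p functional-614508 update-context --alsologtostderr -v=2
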
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.54s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

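By hand (sketch; --components queries the cluster's component versions, which is why it takes ~0.5s against ~0.06s for --short):

  out/minikube-linux-amd64 -p functional-614508 version --short
  out/minikube-linux-amd64 -p functional-614508 version -o=json --components
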
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-614508 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-614508 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-614508 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-614508 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 58805: os: process already finished
helpers_test.go:519: unable to terminate pid 58623: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-614508 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-614508 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6c84ffa4-6dd6-43f5-b3f4-9f3410420834] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1123 08:18:25.184565   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "nginx-svc" [6c84ffa4-6dd6-43f5-b3f4-9f3410420834] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 19.003342325s
I1123 08:18:43.135654   17442 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.19s)

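The tunnel workflow by hand (sketch; testsvc.yaml is the repository's LoadBalancer test service):

  # a running tunnel is what assigns LoadBalancer services an ingress IP
  out/minikube-linux-amd64 -p functional-614508 tunnel --alsologtostderr &
  kubectl --context functional-614508 apply -f testdata/testsvc.yaml
  kubectl --context functional-614508 get pods -l run=nginx-svc -w
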
TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 service list -o json
functional_test.go:1504: Took "540.290511ms" to run "out/minikube-linux-amd64 -p functional-614508 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31307
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31307
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)

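Resolving a NodePort endpoint by hand (sketch using the flags exercised above):

  out/minikube-linux-amd64 -p functional-614508 service list -o json
  # print the https:// endpoint without opening a browser
  out/minikube-linux-amd64 -p functional-614508 service --namespace=default --https --url hello-node
  # or just the node IP, via a Go template
  out/minikube-linux-amd64 -p functional-614508 service hello-node --url --format={{.IP}}
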
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "351.424072ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "134.989596ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "343.954315ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.951729ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

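Note that "profile lis" in profile_not_create is deliberate: the subtest checks that a mistyped subcommand does not create a stray profile. By hand (sketch; the timings above suggest --light skips probing cluster status):

  out/minikube-linux-amd64 profile list
  # --light lists profiles without contacting the clusters, hence ~64ms vs ~344ms
  out/minikube-linux-amd64 profile list -o json --light
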
TestFunctional/parallel/MountCmd/any-port (13s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdany-port923683870/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763885913434982963" to /tmp/TestFunctionalparallelMountCmdany-port923683870/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763885913434982963" to /tmp/TestFunctionalparallelMountCmdany-port923683870/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763885913434982963" to /tmp/TestFunctionalparallelMountCmdany-port923683870/001/test-1763885913434982963
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.402917ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1123 08:18:33.734768   17442 retry.go:31] will retry after 642.198269ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 08:18 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 08:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 08:18 test-1763885913434982963
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh cat /mount-9p/test-1763885913434982963
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-614508 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [1b935d0e-2036-490b-8977-dbd0e1f0e648] Pending
E1123 08:18:35.426753   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [1b935d0e-2036-490b-8977-dbd0e1f0e648] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [1b935d0e-2036-490b-8977-dbd0e1f0e648] Running
helpers_test.go:352: "busybox-mount" [1b935d0e-2036-490b-8977-dbd0e1f0e648] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [1b935d0e-2036-490b-8977-dbd0e1f0e648] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.003695303s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-614508 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdany-port923683870/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.00s)

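The 9p mount workflow by hand (sketch; /tmp/shared is an illustrative host path, and --port pins the server port as in the specific-port subtest further down):

  # share a host directory into the guest over 9p; add "--port 46464" to pin the port
  out/minikube-linux-amd64 mount -p functional-614508 /tmp/shared:/mount-9p --alsologtostderr -v=1 &
  # confirm a 9p filesystem is mounted, then inspect it from the guest
  out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-614508 ssh -- ls -la /mount-9p
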
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-614508 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-614508 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-614508
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-614508
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-614508 image ls --format short --alsologtostderr:
I1123 08:18:51.727672   65633 out.go:360] Setting OutFile to fd 1 ...
I1123 08:18:51.727815   65633 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:18:51.727826   65633 out.go:374] Setting ErrFile to fd 2...
I1123 08:18:51.727833   65633 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:18:51.728163   65633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
I1123 08:18:51.728979   65633 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:18:51.729124   65633 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:18:51.729767   65633 cli_runner.go:164] Run: docker container inspect functional-614508 --format={{.State.Status}}
I1123 08:18:51.752449   65633 ssh_runner.go:195] Run: systemctl --version
I1123 08:18:51.752528   65633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-614508
I1123 08:18:51.772605   65633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/functional-614508/id_rsa Username:docker}
I1123 08:18:51.874921   65633 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-614508 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/nginx                     │ latest             │ sha256:60adc2 │ 59.8MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ docker.io/kicbase/echo-server               │ functional-614508  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-614508  │ sha256:d8667a │ 992B   │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-614508 image ls --format table --alsologtostderr:
I1123 08:18:52.479467   66103 out.go:360] Setting OutFile to fd 1 ...
I1123 08:18:52.479589   66103 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:18:52.479597   66103 out.go:374] Setting ErrFile to fd 2...
I1123 08:18:52.479601   66103 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:18:52.479828   66103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
I1123 08:18:52.480365   66103 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:18:52.480473   66103 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:18:52.480955   66103 cli_runner.go:164] Run: docker container inspect functional-614508 --format={{.State.Status}}
I1123 08:18:52.501210   66103 ssh_runner.go:195] Run: systemctl --version
I1123 08:18:52.501278   66103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-614508
I1123 08:18:52.521232   66103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/functional-614508/id_rsa Username:docker}
I1123 08:18:52.624104   66103 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-614508 image ls --format json --alsologtostderr:
[{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-614508"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de
530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:d8667ad3311087d30fa6b490693d29acc1cf4ced7625ad483add67f5cdf32ef2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-614508"],"size":"992"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32
f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"59772801"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pau
se:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441
e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-614508 image ls --format json --alsologtostderr:
I1123 08:18:52.245778   65944 out.go:360] Setting OutFile to fd 1 ...
I1123 08:18:52.245881   65944 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:18:52.245890   65944 out.go:374] Setting ErrFile to fd 2...
I1123 08:18:52.245894   65944 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:18:52.246095   65944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
I1123 08:18:52.246664   65944 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:18:52.246769   65944 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:18:52.247215   65944 cli_runner.go:164] Run: docker container inspect functional-614508 --format={{.State.Status}}
I1123 08:18:52.269245   65944 ssh_runner.go:195] Run: systemctl --version
I1123 08:18:52.269309   65944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-614508
I1123 08:18:52.290277   65944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/functional-614508/id_rsa Username:docker}
I1123 08:18:52.391155   65944 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-614508 image ls --format yaml --alsologtostderr:
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
- id: sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "59772801"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-614508
size: "2372971"
- id: sha256:d8667ad3311087d30fa6b490693d29acc1cf4ced7625ad483add67f5cdf32ef2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-614508
size: "992"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-614508 image ls --format yaml --alsologtostderr:
I1123 08:18:51.986034   65789 out.go:360] Setting OutFile to fd 1 ...
I1123 08:18:51.986150   65789 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:18:51.986160   65789 out.go:374] Setting ErrFile to fd 2...
I1123 08:18:51.986167   65789 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:18:51.986428   65789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
I1123 08:18:51.986979   65789 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:18:51.987070   65789 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:18:51.987487   65789 cli_runner.go:164] Run: docker container inspect functional-614508 --format={{.State.Status}}
I1123 08:18:52.009286   65789 ssh_runner.go:195] Run: systemctl --version
I1123 08:18:52.009352   65789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-614508
I1123 08:18:52.031435   65789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/functional-614508/id_rsa Username:docker}
I1123 08:18:52.143351   65789 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

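All four list formats are views over the same data: each run's stderr shows the CLI shelling out to "sudo crictl images --output json" on the node. By hand (sketch):

  out/minikube-linux-amd64 -p functional-614508 image ls --format short
  out/minikube-linux-amd64 -p functional-614508 image ls --format table
  out/minikube-linux-amd64 -p functional-614508 image ls --format json
  out/minikube-linux-amd64 -p functional-614508 image ls --format yaml
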
TestFunctional/parallel/ImageCommands/ImageBuild (4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-614508 ssh pgrep buildkitd: exit status 1 (289.294948ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image build -t localhost/my-image:functional-614508 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-614508 image build -t localhost/my-image:functional-614508 testdata/build --alsologtostderr: (3.482044398s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-614508 image build -t localhost/my-image:functional-614508 testdata/build --alsologtostderr:
I1123 08:18:52.502743   66110 out.go:360] Setting OutFile to fd 1 ...
I1123 08:18:52.503042   66110 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:18:52.503053   66110 out.go:374] Setting ErrFile to fd 2...
I1123 08:18:52.503057   66110 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:18:52.503283   66110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
I1123 08:18:52.503877   66110 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:18:52.504623   66110 config.go:182] Loaded profile config "functional-614508": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:18:52.505102   66110 cli_runner.go:164] Run: docker container inspect functional-614508 --format={{.State.Status}}
I1123 08:18:52.524078   66110 ssh_runner.go:195] Run: systemctl --version
I1123 08:18:52.524122   66110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-614508
I1123 08:18:52.544541   66110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/functional-614508/id_rsa Username:docker}
I1123 08:18:52.644295   66110 build_images.go:162] Building image from path: /tmp/build.2366430872.tar
I1123 08:18:52.644401   66110 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 08:18:52.659826   66110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2366430872.tar
I1123 08:18:52.664380   66110 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2366430872.tar: stat -c "%s %y" /var/lib/minikube/build/build.2366430872.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2366430872.tar': No such file or directory
I1123 08:18:52.664409   66110 ssh_runner.go:362] scp /tmp/build.2366430872.tar --> /var/lib/minikube/build/build.2366430872.tar (3072 bytes)
I1123 08:18:52.682737   66110 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2366430872
I1123 08:18:52.690450   66110 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2366430872 -xf /var/lib/minikube/build/build.2366430872.tar
I1123 08:18:52.698360   66110 containerd.go:394] Building image: /var/lib/minikube/build/build.2366430872
I1123 08:18:52.698434   66110 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2366430872 --local dockerfile=/var/lib/minikube/build/build.2366430872 --output type=image,name=localhost/my-image:functional-614508
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:ef0f4defec0f1b05540250fc737111dc0db35690529b397f189fd9fb93aa6eb5 done
#8 exporting config sha256:71bcfc4674e11a6676b10947dd71ae71fb4d879c525dc27197b53db1e4af1e8e done
#8 naming to localhost/my-image:functional-614508
#8 naming to localhost/my-image:functional-614508 done
#8 DONE 0.1s
I1123 08:18:55.902779   66110 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2366430872 --local dockerfile=/var/lib/minikube/build/build.2366430872 --output type=image,name=localhost/my-image:functional-614508: (3.204318277s)
I1123 08:18:55.902849   66110 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2366430872
I1123 08:18:55.911436   66110 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2366430872.tar
I1123 08:18:55.919285   66110 build_images.go:218] Built localhost/my-image:functional-614508 from /tmp/build.2366430872.tar
I1123 08:18:55.919318   66110 build_images.go:134] succeeded building to: functional-614508
I1123 08:18:55.919324   66110 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.00s)

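From the buildkit steps above, testdata/build evidently contains a three-line Dockerfile; a reconstruction (not verified against the repo):

  FROM gcr.io/k8s-minikube/busybox:latest
  RUN true
  ADD content.txt /

Building it by hand follows the same path as the test: minikube copies the build context to the node and runs buildctl there, as the stderr log shows.

  out/minikube-linux-amd64 -p functional-614508 image build -t localhost/my-image:functional-614508 testdata/build --alsologtostderr
  out/minikube-linux-amd64 -p functional-614508 image ls
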
TestFunctional/parallel/ImageCommands/Setup (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.751396178s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-614508
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.148.79 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-614508 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image load --daemon kicbase/echo-server:functional-614508 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image load --daemon kicbase/echo-server:functional-614508 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

TestFunctional/parallel/MountCmd/specific-port (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdspecific-port1497939192/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (316.190872ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1123 08:18:46.752686   17442 retry.go:31] will retry after 291.972754ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T /mount-9p | grep 9p"
I1123 08:18:47.050818   17442 detect.go:223] nested VM detected
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdspecific-port1497939192/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdspecific-port1497939192/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-614508
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image load --daemon kicbase/echo-server:functional-614508 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-614508 image load --daemon kicbase/echo-server:functional-614508 --alsologtostderr: (1.018609078s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.16s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775840806/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775840806/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775840806/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T" /mount1: exit status 1 (414.265191ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1123 08:18:48.606932   17442 retry.go:31] will retry after 353.471218ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-614508 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775840806/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775840806/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-614508 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775840806/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

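The cleanup path exercised here, by hand (sketch):

  # tear down every mount daemon for the profile in one command,
  # instead of stopping each background process individually
  out/minikube-linux-amd64 mount -p functional-614508 --kill=true
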
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image save kicbase/echo-server:functional-614508 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image rm kicbase/echo-server:functional-614508 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)
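
Taken together, the three image tests above amount to a save → remove → load round trip through a tarball, which can be replayed manually; a sketch, with /tmp/echo-server.tar as an illustrative path:

	out/minikube-linux-amd64 -p functional-614508 image save kicbase/echo-server:functional-614508 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-614508 image rm kicbase/echo-server:functional-614508
	out/minikube-linux-amd64 -p functional-614508 image load /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-614508 image ls   # the tag should be listed again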

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-614508
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-614508 image save --daemon kicbase/echo-server:functional-614508 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-614508
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
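
ImageSaveDaemon is the docker-daemon variant of the same round trip: the tag is removed from the host daemon, exported from the cluster runtime with --daemon, then confirmed with docker image inspect:

	docker rmi kicbase/echo-server:functional-614508
	out/minikube-linux-amd64 -p functional-614508 image save --daemon kicbase/echo-server:functional-614508
	docker image inspect kicbase/echo-server:functional-614508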

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-614508
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-614508
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-614508
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (131.63s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1123 08:19:36.870195   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:20:58.791985   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m10.885310619s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (131.63s)
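
The --ha flag provisions multiple control-plane nodes (three in this run) behind one shared API endpoint; the status output later in this report shows clients pointed at https://192.168.49.254:8443. The equivalent manual invocation:

	out/minikube-linux-amd64 -p ha-419063 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 -p ha-419063 status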

TestMultiControlPlane/serial/DeployApp (5.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 kubectl -- rollout status deployment/busybox: (3.778905608s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-7x4z9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-q944k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-z7qpr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-7x4z9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-q944k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-z7qpr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-7x4z9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-q944k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-z7qpr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.90s)
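
The deployment check reduces to waiting for the rollout and confirming in-cluster DNS from every replica; a sketch, with $POD standing in for one of the busybox pod names above:

	out/minikube-linux-amd64 -p ha-419063 kubectl -- rollout status deployment/busybox
	out/minikube-linux-amd64 -p ha-419063 kubectl -- exec $POD -- nslookup kubernetes.default.svc.cluster.local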

TestMultiControlPlane/serial/PingHostFromPods (1.17s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-7x4z9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-7x4z9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-q944k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-q944k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-z7qpr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 kubectl -- exec busybox-7b57f96db7-z7qpr -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)
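
The pipeline above pulls the host's IP out of busybox nslookup output: with that output format, awk 'NR==5' selects the answer's Address line and cut -d' ' -f3 takes its third field, the IP itself (192.168.49.1, the docker network gateway), which each pod then pings:

	out/minikube-linux-amd64 -p ha-419063 kubectl -- exec $POD -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"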

TestMultiControlPlane/serial/AddWorkerNode (23.99s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 node add --alsologtostderr -v 5: (23.085472053s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.99s)
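
Without --control-plane, node add joins a plain worker (m04 in this run); status should then report four nodes:

	out/minikube-linux-amd64 -p ha-419063 node add
	out/minikube-linux-amd64 -p ha-419063 status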

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-419063 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

TestMultiControlPlane/serial/CopyFile (17.48s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp testdata/cp-test.txt ha-419063:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3955037485/001/cp-test_ha-419063.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063:/home/docker/cp-test.txt ha-419063-m02:/home/docker/cp-test_ha-419063_ha-419063-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m02 "sudo cat /home/docker/cp-test_ha-419063_ha-419063-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063:/home/docker/cp-test.txt ha-419063-m03:/home/docker/cp-test_ha-419063_ha-419063-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m03 "sudo cat /home/docker/cp-test_ha-419063_ha-419063-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063:/home/docker/cp-test.txt ha-419063-m04:/home/docker/cp-test_ha-419063_ha-419063-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m04 "sudo cat /home/docker/cp-test_ha-419063_ha-419063-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp testdata/cp-test.txt ha-419063-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3955037485/001/cp-test_ha-419063-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m02:/home/docker/cp-test.txt ha-419063:/home/docker/cp-test_ha-419063-m02_ha-419063.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063 "sudo cat /home/docker/cp-test_ha-419063-m02_ha-419063.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m02:/home/docker/cp-test.txt ha-419063-m03:/home/docker/cp-test_ha-419063-m02_ha-419063-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m03 "sudo cat /home/docker/cp-test_ha-419063-m02_ha-419063-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m02:/home/docker/cp-test.txt ha-419063-m04:/home/docker/cp-test_ha-419063-m02_ha-419063-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m04 "sudo cat /home/docker/cp-test_ha-419063-m02_ha-419063-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp testdata/cp-test.txt ha-419063-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3955037485/001/cp-test_ha-419063-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m03:/home/docker/cp-test.txt ha-419063:/home/docker/cp-test_ha-419063-m03_ha-419063.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063 "sudo cat /home/docker/cp-test_ha-419063-m03_ha-419063.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m03:/home/docker/cp-test.txt ha-419063-m02:/home/docker/cp-test_ha-419063-m03_ha-419063-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m02 "sudo cat /home/docker/cp-test_ha-419063-m03_ha-419063-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m03:/home/docker/cp-test.txt ha-419063-m04:/home/docker/cp-test_ha-419063-m03_ha-419063-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m04 "sudo cat /home/docker/cp-test_ha-419063-m03_ha-419063-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp testdata/cp-test.txt ha-419063-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3955037485/001/cp-test_ha-419063-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m04:/home/docker/cp-test.txt ha-419063:/home/docker/cp-test_ha-419063-m04_ha-419063.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063 "sudo cat /home/docker/cp-test_ha-419063-m04_ha-419063.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m04:/home/docker/cp-test.txt ha-419063-m02:/home/docker/cp-test_ha-419063-m04_ha-419063-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m02 "sudo cat /home/docker/cp-test_ha-419063-m04_ha-419063-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m04:/home/docker/cp-test.txt ha-419063-m03:/home/docker/cp-test_ha-419063-m04_ha-419063-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m03 "sudo cat /home/docker/cp-test_ha-419063-m04_ha-419063-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.48s)
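
The block above is a full copy matrix: host → node, node → host, and node → node for every node pair, each leg verified with ssh + sudo cat. One leg, with /tmp/out.txt as an illustrative host path:

	out/minikube-linux-amd64 -p ha-419063 cp testdata/cp-test.txt ha-419063-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-419063 cp ha-419063-m02:/home/docker/cp-test.txt /tmp/out.txt
	out/minikube-linux-amd64 -p ha-419063 ssh -n ha-419063-m02 "sudo cat /home/docker/cp-test.txt"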

TestMultiControlPlane/serial/StopSecondaryNode (12.76s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 node stop m02 --alsologtostderr -v 5: (12.033854935s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-419063 status --alsologtostderr -v 5: exit status 7 (730.226373ms)

-- stdout --
	ha-419063
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-419063-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-419063-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-419063-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1123 08:22:17.318612   87392 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:17.318882   87392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:17.318892   87392 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:17.318895   87392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:17.319203   87392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:22:17.319367   87392 out.go:368] Setting JSON to false
	I1123 08:22:17.319392   87392 mustload.go:66] Loading cluster: ha-419063
	I1123 08:22:17.319446   87392 notify.go:221] Checking for updates...
	I1123 08:22:17.319861   87392 config.go:182] Loaded profile config "ha-419063": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:22:17.319891   87392 status.go:174] checking status of ha-419063 ...
	I1123 08:22:17.320330   87392 cli_runner.go:164] Run: docker container inspect ha-419063 --format={{.State.Status}}
	I1123 08:22:17.341372   87392 status.go:371] ha-419063 host status = "Running" (err=<nil>)
	I1123 08:22:17.341398   87392 host.go:66] Checking if "ha-419063" exists ...
	I1123 08:22:17.341674   87392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-419063
	I1123 08:22:17.361324   87392 host.go:66] Checking if "ha-419063" exists ...
	I1123 08:22:17.361610   87392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:22:17.361682   87392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-419063
	I1123 08:22:17.380443   87392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/ha-419063/id_rsa Username:docker}
	I1123 08:22:17.480178   87392 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:17.486716   87392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:22:17.499983   87392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:22:17.560937   87392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:22:17.550877644 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:22:17.561494   87392 kubeconfig.go:125] found "ha-419063" server: "https://192.168.49.254:8443"
	I1123 08:22:17.561522   87392 api_server.go:166] Checking apiserver status ...
	I1123 08:22:17.561555   87392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:22:17.574206   87392 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1342/cgroup
	W1123 08:22:17.583442   87392 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1342/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:22:17.583496   87392 ssh_runner.go:195] Run: ls
	I1123 08:22:17.587638   87392 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:22:17.592043   87392 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:22:17.592073   87392 status.go:463] ha-419063 apiserver status = Running (err=<nil>)
	I1123 08:22:17.592082   87392 status.go:176] ha-419063 status: &{Name:ha-419063 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:22:17.592105   87392 status.go:174] checking status of ha-419063-m02 ...
	I1123 08:22:17.592395   87392 cli_runner.go:164] Run: docker container inspect ha-419063-m02 --format={{.State.Status}}
	I1123 08:22:17.613181   87392 status.go:371] ha-419063-m02 host status = "Stopped" (err=<nil>)
	I1123 08:22:17.613211   87392 status.go:384] host is not running, skipping remaining checks
	I1123 08:22:17.613219   87392 status.go:176] ha-419063-m02 status: &{Name:ha-419063-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:22:17.613240   87392 status.go:174] checking status of ha-419063-m03 ...
	I1123 08:22:17.613481   87392 cli_runner.go:164] Run: docker container inspect ha-419063-m03 --format={{.State.Status}}
	I1123 08:22:17.633239   87392 status.go:371] ha-419063-m03 host status = "Running" (err=<nil>)
	I1123 08:22:17.633263   87392 host.go:66] Checking if "ha-419063-m03" exists ...
	I1123 08:22:17.633529   87392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-419063-m03
	I1123 08:22:17.652172   87392 host.go:66] Checking if "ha-419063-m03" exists ...
	I1123 08:22:17.652477   87392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:22:17.652546   87392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-419063-m03
	I1123 08:22:17.672201   87392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/ha-419063-m03/id_rsa Username:docker}
	I1123 08:22:17.772979   87392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:22:17.786129   87392 kubeconfig.go:125] found "ha-419063" server: "https://192.168.49.254:8443"
	I1123 08:22:17.786155   87392 api_server.go:166] Checking apiserver status ...
	I1123 08:22:17.786192   87392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:22:17.797639   87392 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1324/cgroup
	W1123 08:22:17.806317   87392 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1324/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:22:17.806367   87392 ssh_runner.go:195] Run: ls
	I1123 08:22:17.810166   87392 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:22:17.815532   87392 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:22:17.815564   87392 status.go:463] ha-419063-m03 apiserver status = Running (err=<nil>)
	I1123 08:22:17.815576   87392 status.go:176] ha-419063-m03 status: &{Name:ha-419063-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:22:17.815596   87392 status.go:174] checking status of ha-419063-m04 ...
	I1123 08:22:17.815971   87392 cli_runner.go:164] Run: docker container inspect ha-419063-m04 --format={{.State.Status}}
	I1123 08:22:17.834356   87392 status.go:371] ha-419063-m04 host status = "Running" (err=<nil>)
	I1123 08:22:17.834377   87392 host.go:66] Checking if "ha-419063-m04" exists ...
	I1123 08:22:17.834624   87392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-419063-m04
	I1123 08:22:17.853038   87392 host.go:66] Checking if "ha-419063-m04" exists ...
	I1123 08:22:17.853304   87392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:22:17.853356   87392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-419063-m04
	I1123 08:22:17.871900   87392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/ha-419063-m04/id_rsa Username:docker}
	I1123 08:22:17.972094   87392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:22:17.984293   87392 status.go:176] ha-419063-m04 status: &{Name:ha-419063-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.76s)
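
The exit status 7 above is expected rather than a failure: minikube status returns a non-zero code whenever any node is down, so scripts should branch on the code instead of treating it as an error:

	out/minikube-linux-amd64 -p ha-419063 node stop m02
	out/minikube-linux-amd64 -p ha-419063 status || echo "cluster degraded (status exited $?)"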

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.73s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 node start m02 --alsologtostderr -v 5: (7.74043929s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.73s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (92.58s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 stop --alsologtostderr -v 5: (37.326450963s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 start --wait true --alsologtostderr -v 5
E1123 08:23:14.929870   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:20.724415   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:20.730904   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:20.742401   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:20.764204   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:20.805989   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:20.887880   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:21.049425   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:21.371186   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:22.013182   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:23.294889   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:25.856385   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:30.978935   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:41.220394   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:42.634044   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 start --wait true --alsologtostderr -v 5: (55.128032849s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (92.58s)
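
The property under test: a full stop followed by start --wait true must bring back the same node set, which node list confirms before and after:

	out/minikube-linux-amd64 -p ha-419063 stop
	out/minikube-linux-amd64 -p ha-419063 start --wait true
	out/minikube-linux-amd64 -p ha-419063 node list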

TestMultiControlPlane/serial/DeleteSecondaryNode (9.47s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 node delete m03 --alsologtostderr -v 5
E1123 08:24:01.702657   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 node delete m03 --alsologtostderr -v 5: (8.613344305s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.47s)
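
Removing a control-plane member and re-checking cluster readiness by hand:

	out/minikube-linux-amd64 -p ha-419063 node delete m03
	kubectl get nodes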

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

TestMultiControlPlane/serial/StopCluster (36.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 stop --alsologtostderr -v 5
E1123 08:24:42.665428   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 stop --alsologtostderr -v 5: (36.060959192s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-419063 status --alsologtostderr -v 5: exit status 7 (116.367204ms)

-- stdout --
	ha-419063
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-419063-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-419063-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1123 08:24:47.264885  103651 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:24:47.265170  103651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:24:47.265181  103651 out.go:374] Setting ErrFile to fd 2...
	I1123 08:24:47.265185  103651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:24:47.265388  103651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:24:47.265542  103651 out.go:368] Setting JSON to false
	I1123 08:24:47.265565  103651 mustload.go:66] Loading cluster: ha-419063
	I1123 08:24:47.265631  103651 notify.go:221] Checking for updates...
	I1123 08:24:47.265967  103651 config.go:182] Loaded profile config "ha-419063": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:24:47.265985  103651 status.go:174] checking status of ha-419063 ...
	I1123 08:24:47.266377  103651 cli_runner.go:164] Run: docker container inspect ha-419063 --format={{.State.Status}}
	I1123 08:24:47.285482  103651 status.go:371] ha-419063 host status = "Stopped" (err=<nil>)
	I1123 08:24:47.285506  103651 status.go:384] host is not running, skipping remaining checks
	I1123 08:24:47.285512  103651 status.go:176] ha-419063 status: &{Name:ha-419063 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:24:47.285533  103651 status.go:174] checking status of ha-419063-m02 ...
	I1123 08:24:47.285816  103651 cli_runner.go:164] Run: docker container inspect ha-419063-m02 --format={{.State.Status}}
	I1123 08:24:47.303834  103651 status.go:371] ha-419063-m02 host status = "Stopped" (err=<nil>)
	I1123 08:24:47.303857  103651 status.go:384] host is not running, skipping remaining checks
	I1123 08:24:47.303863  103651 status.go:176] ha-419063-m02 status: &{Name:ha-419063-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:24:47.303886  103651 status.go:174] checking status of ha-419063-m04 ...
	I1123 08:24:47.304125  103651 cli_runner.go:164] Run: docker container inspect ha-419063-m04 --format={{.State.Status}}
	I1123 08:24:47.322131  103651 status.go:371] ha-419063-m04 host status = "Stopped" (err=<nil>)
	I1123 08:24:47.322179  103651 status.go:384] host is not running, skipping remaining checks
	I1123 08:24:47.322190  103651 status.go:176] ha-419063-m04 status: &{Name:ha-419063-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.18s)

TestMultiControlPlane/serial/RestartCluster (57.19s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (56.362636806s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.19s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (35.97s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 node add --control-plane --alsologtostderr -v 5
E1123 08:26:04.587233   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-419063 node add --control-plane --alsologtostderr -v 5: (35.076713271s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-419063 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.97s)
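
The inverse operation, rejoining a control-plane node into the running HA cluster:

	out/minikube-linux-amd64 -p ha-419063 node add --control-plane
	out/minikube-linux-amd64 -p ha-419063 status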

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

TestJSONOutput/start/Command (38.61s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-542778 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-542778 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (38.614271217s)
--- PASS: TestJSONOutput/start/Command (38.61s)
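
With --output=json, minikube emits one CloudEvents-style JSON object per line (the TestErrorJSONOutput stdout below shows the schema), so the stream is easy to filter; a sketch, assuming jq is available:

	out/minikube-linux-amd64 start -p json-output-542778 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'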

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-542778 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-542778 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-542778 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-542778 --output=json --user=testUser: (5.849241703s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-531458 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-531458 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.739322ms)

-- stdout --
	{"specversion":"1.0","id":"64de09c0-2764-460b-8ece-2ec5aab73257","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-531458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2ab89b3-88fd-4053-b974-c8381e3c7bd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21969"}}
	{"specversion":"1.0","id":"54885ac2-0621-4aa4-a3cb-109a7c84addf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"73507ebc-d840-462a-87d0-f66ca01564a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig"}}
	{"specversion":"1.0","id":"2c91fa4e-663e-4f02-b82a-758ec51af3f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube"}}
	{"specversion":"1.0","id":"013b303e-c98f-4f8a-bb4d-e9b88212ab9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ba3b5717-cf6a-4c98-b24b-941efb833907","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"36f57755-6b46-4cfe-a351-37a62f555d18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-531458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-531458
--- PASS: TestErrorJSONOutput (0.24s)
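Each line of the JSON output above is a CloudEvents envelope, which makes the stream easy to filter mechanically. A minimal sketch of consuming it, assuming jq is installed (the json-demo profile name is made up for illustration):

	# keep only error events and print their messages
	out/minikube-linux-amd64 start -p json-demo --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'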

TestKicCustomNetwork/create_custom_network (33.67s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-444656 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-444656 --network=: (31.52205087s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-444656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-444656
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-444656: (2.127340255s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.67s)
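For context, the flow this test automates can be reproduced by hand. A sketch with made-up profile and network names:

	out/minikube-linux-amd64 start -p net-demo --network=demo-net   # minikube creates demo-net if it does not exist
	docker network ls --format '{{.Name}}' | grep demo-net          # confirm the network is present
	out/minikube-linux-amd64 delete -p net-demo                     # delete the profile; remove demo-net afterwards if it remains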

TestKicCustomNetwork/use_default_bridge_network (22.39s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-208489 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-208489 --network=bridge: (20.404981446s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-208489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-208489
E1123 08:28:14.929666   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-208489: (1.96724361s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.39s)

TestKicExistingNetwork (24.39s)

=== RUN   TestKicExistingNetwork
I1123 08:28:16.073706   17442 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 08:28:16.093332   17442 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 08:28:16.093415   17442 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 08:28:16.093472   17442 cli_runner.go:164] Run: docker network inspect existing-network
W1123 08:28:16.110722   17442 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 08:28:16.110751   17442 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1123 08:28:16.110773   17442 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1123 08:28:16.110904   17442 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 08:28:16.127816   17442 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5d8b9fdde185 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:76:1f:2b:8a:58:68} reservation:<nil>}
I1123 08:28:16.128165   17442 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bd9630}
I1123 08:28:16.128189   17442 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 08:28:16.128240   17442 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 08:28:16.174844   17442 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-879404 --network=existing-network
E1123 08:28:20.727324   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-879404 --network=existing-network: (22.248521149s)
helpers_test.go:175: Cleaning up "existing-network-879404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-879404
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-879404: (2.007037697s)
I1123 08:28:40.447984   17442 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.39s)
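The interesting part above is that minikube attaches to a bridge network that already exists instead of creating one. A hand-rolled equivalent, with hypothetical names and the subnet taken from the log:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 pre-made
	out/minikube-linux-amd64 start -p reuse-demo --network=pre-made
	out/minikube-linux-amd64 delete -p reuse-demo
	docker network rm pre-made    # clean up the pre-created network if it is still present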

TestKicCustomSubnet (25.88s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-413852 --subnet=192.168.60.0/24
E1123 08:28:48.431542   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-413852 --subnet=192.168.60.0/24: (23.737979244s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-413852 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-413852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-413852
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-413852: (2.126253539s)
--- PASS: TestKicCustomSubnet (25.88s)
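The assertion boils down to reading the subnet back out of docker and comparing it with the flag value. Sketch, profile name hypothetical:

	out/minikube-linux-amd64 start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24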

TestKicStaticIP (29.05s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-196048 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-196048 --static-ip=192.168.200.200: (26.743066191s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-196048 ip
helpers_test.go:175: Cleaning up "static-ip-196048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-196048
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-196048: (2.156417622s)
--- PASS: TestKicStaticIP (29.05s)
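Likewise, the static-IP check is a comparison between the flag value and what `minikube ip` reports. Sketch, profile name hypothetical:

	out/minikube-linux-amd64 start -p ip-demo --static-ip=192.168.200.200
	out/minikube-linux-amd64 -p ip-demo ip    # should print 192.168.200.200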

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (52.36s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-972617 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-972617 --driver=docker  --container-runtime=containerd: (23.721433318s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-974667 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-974667 --driver=docker  --container-runtime=containerd: (22.72712676s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-972617
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-974667
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-974667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-974667
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-974667: (2.329850726s)
helpers_test.go:175: Cleaning up "first-972617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-972617
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-972617: (2.335705223s)
--- PASS: TestMinikubeProfile (52.36s)

TestMountStart/serial/StartWithMountFirst (7.33s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-329400 --memory=3072 --mount-string /tmp/TestMountStartserial479297987/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-329400 --memory=3072 --mount-string /tmp/TestMountStartserial479297987/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.33418144s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.33s)
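The mount flags wire a host directory into the node at start time; the Verify* steps that follow only need to list the target path. A trimmed-down sketch of the same invocation (host path and profile name made up):

	out/minikube-linux-amd64 start -p mount-demo --no-kubernetes \
	  --mount-string /tmp/hostdir:/minikube-host --mount-uid 0 --mount-gid 0 --mount-port 46464
	out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host   # host files should be visible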

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-329400 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (7.46s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-341130 --memory=3072 --mount-string /tmp/TestMountStartserial479297987/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-341130 --memory=3072 --mount-string /tmp/TestMountStartserial479297987/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.456778228s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.46s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-341130 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-329400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-329400 --alsologtostderr -v=5: (1.657306741s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-341130 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-341130
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-341130: (1.256078908s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (7.35s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-341130
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-341130: (6.348635007s)
--- PASS: TestMountStart/serial/RestartStopped (7.35s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-341130 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (64.48s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-709206 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-709206 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m3.978341578s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.48s)
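A two-node cluster comes up from a single start invocation, and status should then report one control plane plus one worker. Sketch with a hypothetical profile name:

	out/minikube-linux-amd64 start -p mn-demo --nodes=2 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 -p mn-demo status   # expect mn-demo and mn-demo-m02 both Running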

TestMultiNode/serial/DeployApp2Nodes (4.79s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-709206 -- rollout status deployment/busybox: (3.27170085s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- exec busybox-7b57f96db7-48q8t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- exec busybox-7b57f96db7-klvl7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- exec busybox-7b57f96db7-48q8t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- exec busybox-7b57f96db7-klvl7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- exec busybox-7b57f96db7-48q8t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- exec busybox-7b57f96db7-klvl7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.79s)
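The deploy step above schedules busybox replicas across both nodes and resolves internal and external names from each pod. Per pod, the check reduces to (pod name is a placeholder):

	kubectl --context mn-demo exec busybox-<pod-suffix> -- nslookup kubernetes.default.svc.cluster.local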

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- exec busybox-7b57f96db7-48q8t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- exec busybox-7b57f96db7-48q8t -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- exec busybox-7b57f96db7-klvl7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709206 -- exec busybox-7b57f96db7-klvl7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

TestMultiNode/serial/AddNode (23.28s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-709206 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-709206 -v=5 --alsologtostderr: (22.63404567s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.28s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-709206 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (9.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp testdata/cp-test.txt multinode-709206:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp multinode-709206:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3122241258/001/cp-test_multinode-709206.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp multinode-709206:/home/docker/cp-test.txt multinode-709206-m02:/home/docker/cp-test_multinode-709206_multinode-709206-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m02 "sudo cat /home/docker/cp-test_multinode-709206_multinode-709206-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp multinode-709206:/home/docker/cp-test.txt multinode-709206-m03:/home/docker/cp-test_multinode-709206_multinode-709206-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m03 "sudo cat /home/docker/cp-test_multinode-709206_multinode-709206-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp testdata/cp-test.txt multinode-709206-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp multinode-709206-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3122241258/001/cp-test_multinode-709206-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp multinode-709206-m02:/home/docker/cp-test.txt multinode-709206:/home/docker/cp-test_multinode-709206-m02_multinode-709206.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206 "sudo cat /home/docker/cp-test_multinode-709206-m02_multinode-709206.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp multinode-709206-m02:/home/docker/cp-test.txt multinode-709206-m03:/home/docker/cp-test_multinode-709206-m02_multinode-709206-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m03 "sudo cat /home/docker/cp-test_multinode-709206-m02_multinode-709206-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp testdata/cp-test.txt multinode-709206-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp multinode-709206-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3122241258/001/cp-test_multinode-709206-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp multinode-709206-m03:/home/docker/cp-test.txt multinode-709206:/home/docker/cp-test_multinode-709206-m03_multinode-709206.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206 "sudo cat /home/docker/cp-test_multinode-709206-m03_multinode-709206.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 cp multinode-709206-m03:/home/docker/cp-test.txt multinode-709206-m02:/home/docker/cp-test_multinode-709206-m03_multinode-709206-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 ssh -n multinode-709206-m02 "sudo cat /home/docker/cp-test_multinode-709206-m03_multinode-709206-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.93s)
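All three copy directions are exercised above: host to node, node to host, and node to node, each verified by cat-ing the file over ssh. A minimal round trip with hypothetical names:

	out/minikube-linux-amd64 -p mn-demo cp testdata/cp-test.txt mn-demo:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p mn-demo ssh -n mn-demo "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p mn-demo cp mn-demo:/home/docker/cp-test.txt mn-demo-m02:/home/docker/cp-test.txt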

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-709206 node stop m03: (1.272153115s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-709206 status: exit status 7 (506.842355ms)

-- stdout --
	multinode-709206
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-709206-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-709206-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-709206 status --alsologtostderr: exit status 7 (507.071184ms)

-- stdout --
	multinode-709206
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-709206-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-709206-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1123 08:32:41.657115  165664 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:32:41.657354  165664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:41.657362  165664 out.go:374] Setting ErrFile to fd 2...
	I1123 08:32:41.657366  165664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:41.657582  165664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:32:41.657765  165664 out.go:368] Setting JSON to false
	I1123 08:32:41.657802  165664 mustload.go:66] Loading cluster: multinode-709206
	I1123 08:32:41.657906  165664 notify.go:221] Checking for updates...
	I1123 08:32:41.658180  165664 config.go:182] Loaded profile config "multinode-709206": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:32:41.658201  165664 status.go:174] checking status of multinode-709206 ...
	I1123 08:32:41.658630  165664 cli_runner.go:164] Run: docker container inspect multinode-709206 --format={{.State.Status}}
	I1123 08:32:41.680310  165664 status.go:371] multinode-709206 host status = "Running" (err=<nil>)
	I1123 08:32:41.680332  165664 host.go:66] Checking if "multinode-709206" exists ...
	I1123 08:32:41.680688  165664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-709206
	I1123 08:32:41.698448  165664 host.go:66] Checking if "multinode-709206" exists ...
	I1123 08:32:41.698744  165664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:32:41.698786  165664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-709206
	I1123 08:32:41.716772  165664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/multinode-709206/id_rsa Username:docker}
	I1123 08:32:41.815253  165664 ssh_runner.go:195] Run: systemctl --version
	I1123 08:32:41.821895  165664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:32:41.834408  165664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:32:41.893960  165664 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-23 08:32:41.884739015 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:32:41.894476  165664 kubeconfig.go:125] found "multinode-709206" server: "https://192.168.67.2:8443"
	I1123 08:32:41.894502  165664 api_server.go:166] Checking apiserver status ...
	I1123 08:32:41.894532  165664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:32:41.906317  165664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1339/cgroup
	W1123 08:32:41.914713  165664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1339/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:32:41.914759  165664 ssh_runner.go:195] Run: ls
	I1123 08:32:41.918437  165664 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 08:32:41.922440  165664 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 08:32:41.922463  165664 status.go:463] multinode-709206 apiserver status = Running (err=<nil>)
	I1123 08:32:41.922480  165664 status.go:176] multinode-709206 status: &{Name:multinode-709206 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:32:41.922501  165664 status.go:174] checking status of multinode-709206-m02 ...
	I1123 08:32:41.922767  165664 cli_runner.go:164] Run: docker container inspect multinode-709206-m02 --format={{.State.Status}}
	I1123 08:32:41.940407  165664 status.go:371] multinode-709206-m02 host status = "Running" (err=<nil>)
	I1123 08:32:41.940465  165664 host.go:66] Checking if "multinode-709206-m02" exists ...
	I1123 08:32:41.940760  165664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-709206-m02
	I1123 08:32:41.959021  165664 host.go:66] Checking if "multinode-709206-m02" exists ...
	I1123 08:32:41.959272  165664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:32:41.959332  165664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-709206-m02
	I1123 08:32:41.977115  165664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21969-13876/.minikube/machines/multinode-709206-m02/id_rsa Username:docker}
	I1123 08:32:42.074735  165664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:32:42.086358  165664 status.go:176] multinode-709206-m02 status: &{Name:multinode-709206-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:32:42.086392  165664 status.go:174] checking status of multinode-709206-m03 ...
	I1123 08:32:42.086710  165664 cli_runner.go:164] Run: docker container inspect multinode-709206-m03 --format={{.State.Status}}
	I1123 08:32:42.105100  165664 status.go:371] multinode-709206-m03 host status = "Stopped" (err=<nil>)
	I1123 08:32:42.105121  165664 status.go:384] host is not running, skipping remaining checks
	I1123 08:32:42.105129  165664 status.go:176] multinode-709206-m03 status: &{Name:multinode-709206-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
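Worth noting is the exit-code contract the assertions rely on: status exits 7 once any node is stopped. A sketch of using that in a script (profile name hypothetical):

	out/minikube-linux-amd64 -p mn-demo node stop m03
	out/minikube-linux-amd64 -p mn-demo status; rc=$?
	[ "$rc" -eq 7 ] && echo "at least one node is stopped"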

TestMultiNode/serial/StartAfterStop (6.87s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-709206 node start m03 -v=5 --alsologtostderr: (6.167609098s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.87s)

TestMultiNode/serial/RestartKeepsNodes (76.05s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-709206
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-709206
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-709206: (25.036751756s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-709206 --wait=true -v=5 --alsologtostderr
E1123 08:33:14.930172   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:33:20.724164   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-709206 --wait=true -v=5 --alsologtostderr: (50.893364747s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-709206
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.05s)

TestMultiNode/serial/DeleteNode (5.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-709206 node delete m03: (4.678326319s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

TestMultiNode/serial/StopMultiNode (24.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-709206 stop: (23.856632002s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-709206 status: exit status 7 (99.251936ms)

-- stdout --
	multinode-709206
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-709206-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-709206 status --alsologtostderr: exit status 7 (98.754624ms)

-- stdout --
	multinode-709206
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-709206-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1123 08:34:34.339502  175468 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:34:34.339598  175468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:34:34.339604  175468 out.go:374] Setting ErrFile to fd 2...
	I1123 08:34:34.339610  175468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:34:34.339864  175468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:34:34.340049  175468 out.go:368] Setting JSON to false
	I1123 08:34:34.340079  175468 mustload.go:66] Loading cluster: multinode-709206
	I1123 08:34:34.340206  175468 notify.go:221] Checking for updates...
	I1123 08:34:34.340923  175468 config.go:182] Loaded profile config "multinode-709206": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:34:34.340967  175468 status.go:174] checking status of multinode-709206 ...
	I1123 08:34:34.342197  175468 cli_runner.go:164] Run: docker container inspect multinode-709206 --format={{.State.Status}}
	I1123 08:34:34.362308  175468 status.go:371] multinode-709206 host status = "Stopped" (err=<nil>)
	I1123 08:34:34.362357  175468 status.go:384] host is not running, skipping remaining checks
	I1123 08:34:34.362375  175468 status.go:176] multinode-709206 status: &{Name:multinode-709206 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:34:34.362412  175468 status.go:174] checking status of multinode-709206-m02 ...
	I1123 08:34:34.362684  175468 cli_runner.go:164] Run: docker container inspect multinode-709206-m02 --format={{.State.Status}}
	I1123 08:34:34.381188  175468 status.go:371] multinode-709206-m02 host status = "Stopped" (err=<nil>)
	I1123 08:34:34.381214  175468 status.go:384] host is not running, skipping remaining checks
	I1123 08:34:34.381222  175468 status.go:176] multinode-709206-m02 status: &{Name:multinode-709206-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)

TestMultiNode/serial/RestartMultiNode (49.75s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-709206 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1123 08:34:37.998381   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-709206 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.148504889s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709206 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.75s)

TestMultiNode/serial/ValidateNameConflict (22.79s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-709206
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-709206-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-709206-m02 --driver=docker  --container-runtime=containerd: exit status 14 (76.183308ms)

-- stdout --
	* [multinode-709206-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-709206-m02' is duplicated with machine name 'multinode-709206-m02' in profile 'multinode-709206'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-709206-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-709206-m03 --driver=docker  --container-runtime=containerd: (20.412208804s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-709206
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-709206: exit status 80 (296.4279ms)

-- stdout --
	* Adding node m03 to cluster multinode-709206 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-709206-m03 already exists in multinode-709206-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-709206-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-709206-m03: (1.944192264s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.79s)

TestPreload (108.87s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-919193 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-919193 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (43.695278987s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-919193 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-919193 image pull gcr.io/k8s-minikube/busybox: (2.423620266s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-919193
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-919193: (5.719068464s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-919193 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-919193 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (54.376908958s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-919193 image list
helpers_test.go:175: Cleaning up "test-preload-919193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-919193
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-919193: (2.423388598s)
--- PASS: TestPreload (108.87s)
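The point of the test is that an image pulled into a cluster started with --preload=false survives a stop/start cycle. The shape of the workflow, with a hypothetical profile name:

	out/minikube-linux-amd64 start -p preload-demo --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p preload-demo
	out/minikube-linux-amd64 start -p preload-demo           # restart without re-pulling
	out/minikube-linux-amd64 -p preload-demo image list      # busybox should still be listed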

TestScheduledStopUnix (99.31s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-865041 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-865041 --memory=3072 --driver=docker  --container-runtime=containerd: (23.310939769s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-865041 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1123 08:38:03.395062  193686 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:38:03.395317  193686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:38:03.395326  193686 out.go:374] Setting ErrFile to fd 2...
	I1123 08:38:03.395330  193686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:38:03.395517  193686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:38:03.395812  193686 out.go:368] Setting JSON to false
	I1123 08:38:03.395902  193686 mustload.go:66] Loading cluster: scheduled-stop-865041
	I1123 08:38:03.396218  193686 config.go:182] Loaded profile config "scheduled-stop-865041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:38:03.396276  193686 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/config.json ...
	I1123 08:38:03.396431  193686 mustload.go:66] Loading cluster: scheduled-stop-865041
	I1123 08:38:03.396518  193686 config.go:182] Loaded profile config "scheduled-stop-865041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-865041 -n scheduled-stop-865041
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-865041 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1123 08:38:03.788905  193833 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:38:03.789155  193833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:38:03.789163  193833 out.go:374] Setting ErrFile to fd 2...
	I1123 08:38:03.789167  193833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:38:03.789348  193833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:38:03.789565  193833 out.go:368] Setting JSON to false
	I1123 08:38:03.789783  193833 daemonize_unix.go:73] killing process 193720 as it is an old scheduled stop
	I1123 08:38:03.789896  193833 mustload.go:66] Loading cluster: scheduled-stop-865041
	I1123 08:38:03.790263  193833 config.go:182] Loaded profile config "scheduled-stop-865041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:38:03.790328  193833 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/config.json ...
	I1123 08:38:03.790535  193833 mustload.go:66] Loading cluster: scheduled-stop-865041
	I1123 08:38:03.790635  193833 config.go:182] Loaded profile config "scheduled-stop-865041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 08:38:03.794557   17442 retry.go:31] will retry after 106.33µs: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.795712   17442 retry.go:31] will retry after 113.352µs: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.796846   17442 retry.go:31] will retry after 150.636µs: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.797973   17442 retry.go:31] will retry after 274.162µs: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.799112   17442 retry.go:31] will retry after 616.43µs: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.801338   17442 retry.go:31] will retry after 809.707µs: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.802470   17442 retry.go:31] will retry after 1.475495ms: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.804703   17442 retry.go:31] will retry after 1.590701ms: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.806937   17442 retry.go:31] will retry after 1.948331ms: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.809154   17442 retry.go:31] will retry after 5.204489ms: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.815361   17442 retry.go:31] will retry after 4.647706ms: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.820558   17442 retry.go:31] will retry after 7.441743ms: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.828809   17442 retry.go:31] will retry after 17.310756ms: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.847133   17442 retry.go:31] will retry after 16.684014ms: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
I1123 08:38:03.864415   17442 retry.go:31] will retry after 41.930885ms: open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-865041 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
E1123 08:38:14.929328   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:38:20.728506   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-865041 -n scheduled-stop-865041
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-865041
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-865041 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1123 08:38:29.655812  194706 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:38:29.656038  194706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:38:29.656050  194706 out.go:374] Setting ErrFile to fd 2...
	I1123 08:38:29.656056  194706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:38:29.656282  194706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:38:29.656544  194706 out.go:368] Setting JSON to false
	I1123 08:38:29.656624  194706 mustload.go:66] Loading cluster: scheduled-stop-865041
	I1123 08:38:29.656942  194706 config.go:182] Loaded profile config "scheduled-stop-865041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:38:29.657008  194706 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/scheduled-stop-865041/config.json ...
	I1123 08:38:29.657189  194706 mustload.go:66] Loading cluster: scheduled-stop-865041
	I1123 08:38:29.657279  194706 config.go:182] Loaded profile config "scheduled-stop-865041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-865041
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-865041: exit status 7 (80.023871ms)
-- stdout --
	scheduled-stop-865041
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-865041 -n scheduled-stop-865041
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-865041 -n scheduled-stop-865041: exit status 7 (78.413145ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-865041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-865041
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-865041: (4.512232213s)
--- PASS: TestScheduledStopUnix (99.31s)
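The "retry.go:31] will retry after ..." lines above show the test polling for the scheduled-stop pid file with steadily growing delays until the daemon has written it. A minimal Go sketch of that pattern, assuming a hypothetical pid-file path (this illustrates the backoff shape only; it is not minikube's actual retry helper):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile polls for a file with roughly doubling delays, mirroring
// the intervals printed in the log above (106µs, 113µs, ... 41ms).
func waitForPidFile(path string, attempts int) ([]byte, error) {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off so early retries stay cheap
	}
	return nil, fmt.Errorf("gave up waiting for %s", path)
}

func main() {
	// "/tmp/scheduled-stop.pid" is a made-up path for this example.
	if pid, err := waitForPidFile("/tmp/scheduled-stop.pid", 10); err != nil {
		fmt.Println(err)
	} else {
		fmt.Printf("scheduled stop pid: %s\n", pid)
	}
}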
TestInsufficientStorage (12.1s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-517399 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-517399 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.634032048s)
-- stdout --
	{"specversion":"1.0","id":"5024eb4c-4cbe-4699-a372-fbb849593be7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-517399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c45dc24a-1aa6-4798-9ffb-6e22af5033dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21969"}}
	{"specversion":"1.0","id":"46b6c080-87e2-4fe2-beac-be8f6255e1b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7099b0a6-00f3-433a-b217-19973b52a4ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig"}}
	{"specversion":"1.0","id":"ceb58733-c999-4461-b048-33fc93a947e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube"}}
	{"specversion":"1.0","id":"c24ddafb-b36c-4871-b79b-e24898ff4d43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bb85c3fb-43ed-4703-8541-cc7c57c8b6ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"898643a3-2348-4d7c-b893-10fe26151871","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d4a64dcd-5c90-4466-b9a9-aaa66d6fb298","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3f6f4f21-97ae-470a-8f5a-a1d223616c21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"aca05d4c-2ee6-4770-885b-372587d2171c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"903979e9-f6e6-4528-a25f-5c91f51e61c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-517399\" primary control-plane node in \"insufficient-storage-517399\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d33d908-d9be-43a8-adc8-8d3c88ad832a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"026fb0ef-91c3-49f7-b8c0-a31dadb631aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e7ff0c3-2679-44d7-b78d-5e0e575c4c42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-517399 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-517399 --output=json --layout=cluster: exit status 7 (293.575854ms)
-- stdout --
	{"Name":"insufficient-storage-517399","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-517399","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1123 08:39:29.253022  196974 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-517399" does not appear in /home/jenkins/minikube-integration/21969-13876/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-517399 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-517399 --output=json --layout=cluster: exit status 7 (289.735001ms)
-- stdout --
	{"Name":"insufficient-storage-517399","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-517399","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1123 08:39:29.542901  197082 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-517399" does not appear in /home/jenkins/minikube-integration/21969-13876/kubeconfig
	E1123 08:39:29.553283  197082 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/insufficient-storage-517399/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-517399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-517399
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-517399: (1.882661884s)
--- PASS: TestInsufficientStorage (12.10s)
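With --output=json, each progress line above is a CloudEvents-style JSON object, and the failure is reported as an "io.k8s.sigs.minikube.error" event carrying exitcode 26 and the RSRC_DOCKER_STORAGE reason. A minimal decoding sketch, assuming only the field names visible in this report:

package main

import (
	"encoding/json"
	"fmt"
)

// event models just the fields used here; the data values are strings in the log.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Abbreviated from the RSRC_DOCKER_STORAGE line above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	if e.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("error %s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
	}
}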
TestRunningBinaryUpgrade (92.58s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1258096035 start -p running-upgrade-743237 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1123 08:39:43.793197   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1258096035 start -p running-upgrade-743237 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m5.70859838s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-743237 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-743237 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.232923216s)
helpers_test.go:175: Cleaning up "running-upgrade-743237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-743237
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-743237: (1.989080791s)
--- PASS: TestRunningBinaryUpgrade (92.58s)
TestKubernetesUpgrade (339.07s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.324971657s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-776670
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-776670: (6.378585746s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-776670 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-776670 status --format={{.Host}}: exit status 7 (135.462595ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m37.335994885s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-776670 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (84.213628ms)
-- stdout --
	* [kubernetes-upgrade-776670] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-776670
	    minikube start -p kubernetes-upgrade-776670 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7766702 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-776670 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-776670 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.93209541s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-776670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-776670
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-776670: (2.811700059s)
--- PASS: TestKubernetesUpgrade (339.07s)
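The downgrade step is expected to fail: minikube refuses to move the existing v1.34.1 cluster back to v1.28.0 and exits with status 106 alongside the K8S_DOWNGRADE_UNSUPPORTED reason. A hedged sketch of recovering that exit code with os/exec (flags copied from the log; the assertion style is illustrative, not the test's actual helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "kubernetes-upgrade-776670", "--memory=3072",
		"--kubernetes-version=v1.28.0", "--driver=docker",
		"--container-runtime=containerd")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode()) // 106 expected here
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}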
TestMissingContainerUpgrade (83.3s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2197446591 start -p missing-upgrade-231159 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2197446591 start -p missing-upgrade-231159 --memory=3072 --driver=docker  --container-runtime=containerd: (23.442531842s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-231159
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-231159
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-231159 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-231159 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (54.517112853s)
helpers_test.go:175: Cleaning up "missing-upgrade-231159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-231159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-231159: (2.046486269s)
--- PASS: TestMissingContainerUpgrade (83.30s)
TestPause/serial/Start (83.49s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-267980 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-267980 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m23.489924855s)
--- PASS: TestPause/serial/Start (83.49s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-846693 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-846693 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (77.88496ms)
-- stdout --
	* [NoKubernetes-846693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
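Exit status 14 marks a usage error: --kubernetes-version is meaningless once --no-kubernetes is set, so validation rejects the pair before any cluster work begins. A generic sketch of that kind of mutual-exclusion check using the standard flag package (minikube's real CLI wiring differs; this only illustrates the rule):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // matches the usage-error exit status in the log
	}
	fmt.Println("flags are consistent; continuing")
}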
TestNoKubernetes/serial/StartWithK8s (20.3s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-846693 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-846693 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (19.942862175s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-846693 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (20.30s)
TestPause/serial/SecondStartNoReconfiguration (6.3s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-267980 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-267980 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.28429875s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.30s)
TestNoKubernetes/serial/StartWithStopK8s (22.32s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-846693 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-846693 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (20.046792123s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-846693 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-846693 status -o json: exit status 2 (298.709929ms)
-- stdout --
	{"Name":"NoKubernetes-846693","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-846693
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-846693: (1.976450456s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.32s)
TestPause/serial/Pause (0.82s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-267980 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)
TestPause/serial/VerifyStatus (0.36s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-267980 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-267980 --output=json --layout=cluster: exit status 2 (356.901714ms)
-- stdout --
	{"Name":"pause-267980","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-267980","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
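The --layout=cluster status JSON reuses HTTP-like codes: 200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage (compare TestInsufficientStorage above). A small decoding sketch covering only the top-level keys shown in this report:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterState names just the fields that appear in the stdout blocks above.
type clusterState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	out := `{"Name":"pause-267980","StatusCode":418,"StatusName":"Paused"}`
	var st clusterState
	if err := json.Unmarshal([]byte(out), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s is %s (code %d)\n", st.Name, st.StatusName, st.StatusCode)
}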
TestPause/serial/Unpause (0.69s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-267980 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)
TestPause/serial/PauseAgain (0.68s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-267980 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.68s)
TestPause/serial/DeletePaused (2.78s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-267980 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-267980 --alsologtostderr -v=5: (2.775148637s)
--- PASS: TestPause/serial/DeletePaused (2.78s)
TestPause/serial/VerifyDeletedResources (15.59s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.540905324s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-267980
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-267980: exit status 1 (17.129998ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-267980: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.59s)
TestNetworkPlugins/group/false (3.4s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-794429 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-794429 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (164.224191ms)
-- stdout --
	* [false-794429] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1123 08:41:07.530576  219841 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:41:07.530844  219841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:41:07.530854  219841 out.go:374] Setting ErrFile to fd 2...
	I1123 08:41:07.530858  219841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:41:07.531121  219841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-13876/.minikube/bin
	I1123 08:41:07.531636  219841 out.go:368] Setting JSON to false
	I1123 08:41:07.532691  219841 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5008,"bootTime":1763882259,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:41:07.532751  219841 start.go:143] virtualization: kvm guest
	I1123 08:41:07.534701  219841 out.go:179] * [false-794429] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:41:07.535933  219841 notify.go:221] Checking for updates...
	I1123 08:41:07.535939  219841 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:41:07.537407  219841 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:41:07.539028  219841 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-13876/kubeconfig
	I1123 08:41:07.540280  219841 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-13876/.minikube
	I1123 08:41:07.541694  219841 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:41:07.542953  219841 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:41:07.544780  219841 config.go:182] Loaded profile config "NoKubernetes-846693": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1123 08:41:07.544887  219841 config.go:182] Loaded profile config "kubernetes-upgrade-776670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:41:07.544976  219841 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:41:07.569594  219841 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:41:07.569693  219841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:41:07.628621  219841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 08:41:07.617787049 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:41:07.628752  219841 docker.go:319] overlay module found
	I1123 08:41:07.630606  219841 out.go:179] * Using the docker driver based on user configuration
	I1123 08:41:07.631960  219841 start.go:309] selected driver: docker
	I1123 08:41:07.631976  219841 start.go:927] validating driver "docker" against <nil>
	I1123 08:41:07.631988  219841 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:41:07.633710  219841 out.go:203] 
	W1123 08:41:07.635083  219841 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1123 08:41:07.636297  219841 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-794429 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-794429

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-794429" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-846693
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-776670
contexts:
- context:
    cluster: NoKubernetes-846693
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-846693
  name: NoKubernetes-846693
- context:
    cluster: kubernetes-upgrade-776670
    user: kubernetes-upgrade-776670
  name: kubernetes-upgrade-776670
current-context: ""
kind: Config
users:
- name: NoKubernetes-846693
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/NoKubernetes-846693/client.crt
    client-key: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/NoKubernetes-846693/client.key
- name: kubernetes-upgrade-776670
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/kubernetes-upgrade-776670/client.crt
    client-key: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/kubernetes-upgrade-776670/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-794429

>>> host: docker daemon status:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: docker daemon config:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: /etc/docker/daemon.json:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: docker system info:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: cri-docker daemon status:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: cri-docker daemon config:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: cri-dockerd version:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: containerd daemon status:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: containerd daemon config:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: /etc/containerd/config.toml:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: containerd config dump:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: crio daemon status:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: crio daemon config:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: /etc/crio:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

>>> host: crio config:
* Profile "false-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-794429"

----------------------- debugLogs end: false-794429 [took: 3.076912592s] --------------------------------
helpers_test.go:175: Cleaning up "false-794429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-794429
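
Note: the "context was not found" error in the debug log above is ordinary kubeconfig resolution. The merged config (shown under ">>> k8s: kubectl config:") only defines the NoKubernetes-846693 and kubernetes-upgrade-776670 contexts, so any command run with --context false-794429 fails before ever reaching a server. A minimal sketch of that lookup with client-go's clientcmd package (the kubeconfig path is illustrative):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same file kubectl would read (illustrative path).
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	// kubectl --context NAME resolves NAME against this map; a miss
	// produces "context was not found for specified context: NAME".
	if _, ok := cfg.Contexts["false-794429"]; !ok {
		fmt.Println("context was not found for specified context: false-794429")
	}
}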
--- PASS: TestNetworkPlugins/group/false (3.40s)

TestNoKubernetes/serial/Start (9.21s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-846693 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-846693 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (9.20763616s)
--- PASS: TestNoKubernetes/serial/Start (9.21s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21969-13876/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-846693 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-846693 "sudo systemctl is-active --quiet service kubelet": exit status 1 (337.875927ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
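
Note: the "Process exited with status 3" above is the expected outcome, not a failure: systemctl is-active exits 0 when a unit is active and 3 when it is inactive, which is exactly what a profile started with --no-kubernetes should report for kubelet. A sketch of the same check run locally (unit name as in the test):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active exits 0 for "active" and non-zero
	// (typically 3) for inactive or unknown units.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet not active (exit %d)\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}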

TestNoKubernetes/serial/ProfileList (19.87s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (19.057937629s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (19.87s)
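
Note: profile list --output=json emits a machine-readable view of the same table. A sketch of consuming it, assuming the top-level "valid"/"invalid" arrays the command currently prints (the struct is trimmed to Name for illustration):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profiles mirrors only the part of the JSON this sketch needs.
type profiles struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		log.Fatal(err)
	}
	for _, v := range p.Valid {
		fmt.Println("profile:", v.Name)
	}
}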

TestNoKubernetes/serial/Stop (2.18s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-846693
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-846693: (2.182675104s)
--- PASS: TestNoKubernetes/serial/Stop (2.18s)

TestNoKubernetes/serial/StartNoArgs (7s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-846693 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-846693 --driver=docker  --container-runtime=containerd: (6.999353542s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.00s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-846693 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-846693 "sudo systemctl is-active --quiet service kubelet": exit status 1 (315.588241ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStoppedBinaryUpgrade/Setup (2.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.59s)

TestStoppedBinaryUpgrade/Upgrade (42.49s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2389134882 start -p stopped-upgrade-595653 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2389134882 start -p stopped-upgrade-595653 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (21.197842143s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2389134882 -p stopped-upgrade-595653 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2389134882 -p stopped-upgrade-595653 stop: (1.229881442s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-595653 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-595653 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.058032181s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (42.49s)
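
Note: the upgrade test above exercises a fixed sequence: start a cluster with the old release binary, stop it, then start the same profile with the binary under test, which must adopt the stopped profile. A compressed sketch of that sequence (binary paths and profile name taken from the log):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one minikube invocation and fails fast on error.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	old, cur := "/tmp/minikube-v1.32.0.2389134882", "out/minikube-linux-amd64"
	run(old, "start", "-p", "stopped-upgrade-595653", "--memory=3072", "--vm-driver=docker", "--container-runtime=containerd")
	run(old, "-p", "stopped-upgrade-595653", "stop")
	// The new binary must recognize and restart the old binary's profile.
	run(cur, "start", "-p", "stopped-upgrade-595653", "--memory=3072", "--driver=docker", "--container-runtime=containerd")
}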

TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-595653
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-595653: (1.196105328s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

TestStartStop/group/old-k8s-version/serial/FirstStart (51.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-204346 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1123 08:43:14.930012   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:20.723632   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-204346 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.610375678s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.61s)

TestStartStop/group/no-preload/serial/FirstStart (52.01s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (52.006038563s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-204346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-204346 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-204346 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-204346 --alsologtostderr -v=3: (12.085204289s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-204346 -n old-k8s-version-204346
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-204346 -n old-k8s-version-204346: exit status 7 (78.440676ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-204346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
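
Note: the "exit status 7 (may be ok)" pattern above is deliberate: minikube status encodes component state in exit-code bits rather than failing outright, so a fully stopped profile yields 7. A sketch of decoding it, assuming the bit assignments the status command uses (1 = host, 2 = cluster, 4 = Kubernetes not running):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Assumed flag bits for `minikube status` exit codes.
const (
	hostNotRunning    = 1 << 0
	clusterNotRunning = 1 << 1
	k8sNotRunning     = 1 << 2
)

func main() {
	err := exec.Command("out/minikube-linux-amd64", "status", "-p", "old-k8s-version-204346").Run()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println("host stopped:", code&hostNotRunning != 0)
	fmt.Println("cluster stopped:", code&clusterNotRunning != 0)
	fmt.Println("kubernetes stopped:", code&k8sNotRunning != 0) // 7 sets all three
}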

TestStartStop/group/old-k8s-version/serial/SecondStart (44.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-204346 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-204346 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (43.864789472s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-204346 -n old-k8s-version-204346
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.25s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-999106 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-999106 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/no-preload/serial/Stop (12.71s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-999106 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-999106 --alsologtostderr -v=3: (12.714238447s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.71s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-999106 -n no-preload-999106
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-999106 -n no-preload-999106: exit status 7 (93.501725ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-999106 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (49.53s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-999106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (49.181512142s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-999106 -n no-preload-999106
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.53s)

TestStartStop/group/embed-certs/serial/FirstStart (45.84s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-319770 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-319770 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (45.841579488s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.84s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-s6klg" [dd3f64ff-5f11-43ed-984b-e4fb128d3358] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003494409s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
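
Note: the "waiting 9m0s for pods matching ..." lines come from a poll-until-healthy helper; conceptually it is a label-selector list repeated until some matching pod reports Running. A minimal client-go sketch of that loop (selector and namespace from the log; the kubeconfig path is illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 2s, up to 9m, for a Running dashboard pod.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("healthy:", err == nil)
}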

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-525009 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-525009 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (41.354488893s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.35s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-s6klg" [dd3f64ff-5f11-43ed-984b-e4fb128d3358] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004127494s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-204346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-204346 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
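
Note: VerifyKubernetesImages lists every image in the runtime and flags anything outside the registries the suite treats as its own; the "Found non-minikube image" lines above are that filter firing on kindnet and the busybox test image (informational, not a failure). A toy version of the check; the allow-list here is an assumption for illustration, not the suite's actual list:

package main

import (
	"fmt"
	"strings"
)

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.28.0",
		"kindest/kindnetd:v20250512-df8de77b",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	// Illustrative allow-list of "minikube's own" image prefixes.
	expected := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, img := range images {
		known := false
		for _, p := range expected {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}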

TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-204346 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-204346 -n old-k8s-version-204346
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-204346 -n old-k8s-version-204346: exit status 2 (356.533589ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-204346 -n old-k8s-version-204346
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-204346 -n old-k8s-version-204346: exit status 2 (355.182046ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-204346 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-204346 -n old-k8s-version-204346
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-204346 -n old-k8s-version-204346
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.99s)
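
Note: the Pause sequence asserts component state through --format Go templates: after pause, {{.APIServer}} must print Paused and {{.Kubelet}} Stopped (each with exit status 2), and unpause must flip them back. A sketch of how such a template renders against a status-shaped struct (fields assumed from the keys the status command prints in JSON mode):

package main

import (
	"os"
	"text/template"
)

// status mirrors the fields the --format templates reference.
type status struct {
	Host, Kubelet, APIServer string
}

func main() {
	st := status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	// Equivalent of: minikube status --format={{.APIServer}}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}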

TestStartStop/group/newest-cni/serial/FirstStart (31.11s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (31.11041517s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.11s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mxcp8" [37e5ec66-4448-4a3a-b9b2-2f5299db7e39] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003311564s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mxcp8" [37e5ec66-4448-4a3a-b9b2-2f5299db7e39] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003422194s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-999106 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-999106 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.97s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-999106 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-999106 -n no-preload-999106
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-999106 -n no-preload-999106: exit status 2 (324.767212ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-999106 -n no-preload-999106
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-999106 -n no-preload-999106: exit status 2 (330.95134ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-999106 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-999106 -n no-preload-999106
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-999106 -n no-preload-999106
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.97s)

TestNetworkPlugins/group/auto/Start (70.77s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m10.771641932s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.77s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-399335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (1.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-399335 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-399335 --alsologtostderr -v=3: (1.429331249s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-399335 -n newest-cni-399335
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-399335 -n newest-cni-399335: exit status 7 (91.3179ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-399335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (13.32s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-399335 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (12.944863742s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-399335 -n newest-cni-399335
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-319770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-319770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.106779386s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-319770 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-525009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-525009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.227662808s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-525009 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.37s)

TestStartStop/group/embed-certs/serial/Stop (13.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-319770 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-319770 --alsologtostderr -v=3: (13.160685931s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-525009 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-525009 --alsologtostderr -v=3: (12.763187358s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.76s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-399335 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.81s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-399335 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-399335 -n newest-cni-399335
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-399335 -n newest-cni-399335: exit status 2 (335.483493ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-399335 -n newest-cni-399335
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-399335 -n newest-cni-399335: exit status 2 (326.946271ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-399335 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-399335 -n newest-cni-399335
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-399335 -n newest-cni-399335
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.81s)

TestNetworkPlugins/group/kindnet/Start (44.68s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (44.678330346s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.68s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009: exit status 7 (79.704381ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-525009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-525009 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-525009 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (45.888499977s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.27s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-319770 -n embed-certs-319770
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-319770 -n embed-certs-319770: exit status 7 (101.143332ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-319770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/embed-certs/serial/SecondStart (51.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-319770 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-319770 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (51.054856984s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-319770 -n embed-certs-319770
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.42s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-794429 "pgrep -a kubelet"
I1123 08:47:02.997275   17442 config.go:182] Loaded profile config "auto-794429": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-794429 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2hljm" [3784a421-2b79-4aaa-b695-cce076b644c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2hljm" [3784a421-2b79-4aaa-b695-cce076b644c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004230311s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-dqcxx" [b7d69750-51a7-43b4-a33b-ef007647d9d0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003236883s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-794429 "pgrep -a kubelet"
I1123 08:47:11.864053   17442 config.go:182] Loaded profile config "kindnet-794429": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-794429 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qcpl6" [63622299-2ec3-4033-a296-f15dec311f28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qcpl6" [63622299-2ec3-4033-a296-f15dec311f28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003008948s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-794429 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
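The Localhost and HairPin subtests above run the same probe from inside the netcat pod, against localhost and against the pod's own Service respectively. A minimal manual equivalent, assuming the netcat deployment from testdata/netcat-deployment.yaml is still running in the default namespace (flag semantics per common netcat implementations: -z probe-only, -w 5 connect timeout, -i 5 interval):

	# hairpin check: the pod must be able to reach itself through its own Service VIP
	kubectl --context auto-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
	# localhost check: the server must also answer on the loopback interface
	kubectl --context auto-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"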

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-97nvd" [5d3a36c5-23ce-4259-878f-f1e117811385] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004106008s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d82k8" [06a675c8-6779-4d16-85a8-ecef6037bd23] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003870349s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-97nvd" [5d3a36c5-23ce-4259-878f-f1e117811385] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003270887s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-525009 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-794429 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-525009 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
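The image inventory that VerifyKubernetesImages parses can be reproduced by hand; a sketch, assuming the JSON output carries a repoTags field per image and that jq is available (both are assumptions — any JSON viewer works):

	# dump the profile's image list as JSON and extract the tags
	out/minikube-linux-amd64 -p default-k8s-diff-port-525009 image list --format=json | jq -r '.[].repoTags[]'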

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-525009 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009: exit status 2 (344.781021ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009: exit status 2 (347.178838ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-525009 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-525009 -n default-k8s-diff-port-525009
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)
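Condensed, the Pause subtest above is the following four commands; the exit status 2 from status against a paused cluster is expected, as the "(may be ok)" notes indicate:

	out/minikube-linux-amd64 pause -p default-k8s-diff-port-525009
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-525009   # prints "Paused", exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-525009     # prints "Stopped", exit status 2
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-525009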

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d82k8" [06a675c8-6779-4d16-85a8-ecef6037bd23] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004158517s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-319770 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-319770 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (3.58s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-319770 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-319770 --alsologtostderr -v=1: (1.001233292s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-319770 -n embed-certs-319770
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-319770 -n embed-certs-319770: exit status 2 (430.671188ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-319770 -n embed-certs-319770
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-319770 -n embed-certs-319770: exit status 2 (378.731383ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-319770 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-319770 -n embed-certs-319770
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-319770 -n embed-certs-319770
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.58s)
E1123 08:48:59.865764   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/old-k8s-version-204346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:48:59.872193   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/old-k8s-version-204346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:48:59.883650   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/old-k8s-version-204346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:48:59.905023   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/old-k8s-version-204346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:48:59.946438   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/old-k8s-version-204346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:49:00.027843   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/old-k8s-version-204346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:49:00.189693   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/old-k8s-version-204346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/calico/Start (52.95s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (52.950922798s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.95s)
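The Start subtests in this group differ only in the CNI selector handed to minikube start; the shared invocation, with <profile> and <cni> as placeholders (everything else is taken verbatim from the commands logged above):

	# <cni> is calico, flannel, bridge, a custom manifest path, or implied by --enable-default-cni
	out/minikube-linux-amd64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=<cni> --driver=docker --container-runtime=containerd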

TestNetworkPlugins/group/custom-flannel/Start (51.74s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (51.743743938s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.74s)

TestNetworkPlugins/group/enable-default-cni/Start (40.05s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (40.054225065s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (40.05s)

TestNetworkPlugins/group/flannel/Start (59.31s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1123 08:48:14.929292   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/addons-963149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (59.305828241s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.31s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-794429 "pgrep -a kubelet"
I1123 08:48:17.020945   17442 config.go:182] Loaded profile config "enable-default-cni-794429": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-794429 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-r7sn5" [153cd390-fb22-4dc7-9d1f-aad22b6ec188] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-r7sn5" [153cd390-fb22-4dc7-9d1f-aad22b6ec188] Running
E1123 08:48:20.723897   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/functional-614508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003762609s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.31s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-6q4cr" [f08de8cc-4b46-40ca-84f2-2d061540a652] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-6q4cr" [f08de8cc-4b46-40ca-84f2-2d061540a652] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004164725s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-794429 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-794429 "pgrep -a kubelet"
I1123 08:48:27.049877   17442 config.go:182] Loaded profile config "custom-flannel-794429": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-794429 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cbnng" [40d230f1-6c15-48d2-ac56-76b98618d627] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cbnng" [40d230f1-6c15-48d2-ac56-76b98618d627] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003671154s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-794429 "pgrep -a kubelet"
I1123 08:48:29.603723   17442 config.go:182] Loaded profile config "calico-794429": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (8.21s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-794429 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rk6qk" [3aeb58db-25de-4c9d-96ff-56e83cc6d02e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rk6qk" [3aeb58db-25de-4c9d-96ff-56e83cc6d02e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.003541931s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-794429 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-794429 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-snr8d" [7c47263c-564a-4ca6-b65c-3de8797095d1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.008546941s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
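The ControllerPod subtests poll until a pod matching the CNI's label reports Running and healthy; a rough kubectl-only equivalent of the flannel check (a sketch, not the call net_test.go actually makes):

	# block up to 10m until the flannel DaemonSet pod is Ready
	kubectl --context flannel-794429 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m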

TestNetworkPlugins/group/bridge/Start (63.1s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-794429 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m3.095699713s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.10s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.5s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-794429 "pgrep -a kubelet"
I1123 08:48:50.153389   17442 config.go:182] Loaded profile config "flannel-794429": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.50s)

TestNetworkPlugins/group/flannel/NetCatPod (10.04s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-794429 replace --force -f testdata/netcat-deployment.yaml
I1123 08:48:50.822418   17442 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1123 08:48:51.106527   17442 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hgssq" [97b1f9bd-ad00-46f0-958f-c4ec50883693] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hgssq" [97b1f9bd-ad00-46f0-958f-c4ec50883693] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00470934s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.04s)

TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-794429 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1123 08:49:00.512203   17442 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/old-k8s-version-204346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-794429 "pgrep -a kubelet"
I1123 08:49:48.995923   17442 config.go:182] Loaded profile config "bridge-794429": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.17s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-794429 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f4tvk" [3849797a-d274-4824-8d6a-e6b63bb7df45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f4tvk" [3849797a-d274-4824-8d6a-e6b63bb7df45] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003502553s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

TestNetworkPlugins/group/bridge/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-794429 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-794429 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

Test skip (26/333)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-445958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-445958
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.46s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-794429 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-794429
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-794429
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-794429
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-794429
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-794429
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-794429
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-794429
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-794429
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-794429
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-794429
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"
>>> host: /etc/hosts:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"
>>> host: /etc/resolv.conf:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-794429
>>> host: crictl pods:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"
>>> host: crictl containers:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"
>>> k8s: describe netcat deployment:
error: context "kubenet-794429" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-794429" does not exist
>>> k8s: netcat logs:
error: context "kubenet-794429" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-794429" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-794429" does not exist
>>> k8s: coredns logs:
error: context "kubenet-794429" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-794429" does not exist
>>> k8s: api server logs:
error: context "kubenet-794429" does not exist
>>> host: /etc/cni:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"
>>> host: ip a s:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-794429" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-794429" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-846693
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-776670
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-267980
contexts:
- context:
    cluster: NoKubernetes-846693
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-846693
  name: NoKubernetes-846693
- context:
    cluster: kubernetes-upgrade-776670
    user: kubernetes-upgrade-776670
  name: kubernetes-upgrade-776670
- context:
    cluster: pause-267980
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-267980
  name: pause-267980
current-context: ""
kind: Config
users:
- name: NoKubernetes-846693
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/NoKubernetes-846693/client.crt
    client-key: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/NoKubernetes-846693/client.key
- name: kubernetes-upgrade-776670
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/kubernetes-upgrade-776670/client.crt
    client-key: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/kubernetes-upgrade-776670/client.key
- name: pause-267980
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/pause-267980/client.crt
    client-key: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/pause-267980/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-794429

>>> host: docker daemon status:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: docker daemon config:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: docker system info:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: cri-docker daemon status:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: cri-docker daemon config:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: cri-dockerd version:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: containerd daemon status:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: containerd daemon config:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: containerd config dump:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: crio daemon status:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: crio daemon config:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: /etc/crio:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

>>> host: crio config:
* Profile "kubenet-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-794429"

----------------------- debugLogs end: kubenet-794429 [took: 3.303332228s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-794429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-794429
--- SKIP: TestNetworkPlugins/group/kubenet (3.46s)

TestNetworkPlugins/group/cilium (3.53s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-794429 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-794429

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-794429

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-794429

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-794429

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-794429

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-794429

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-794429

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-794429

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-794429

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-794429

>>> host: /etc/nsswitch.conf:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: /etc/hosts:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: /etc/resolv.conf:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-794429

>>> host: crictl pods:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: crictl containers:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> k8s: describe netcat deployment:
error: context "cilium-794429" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-794429" does not exist

>>> k8s: netcat logs:
error: context "cilium-794429" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-794429" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-794429" does not exist

>>> k8s: coredns logs:
error: context "cilium-794429" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-794429" does not exist

>>> k8s: api server logs:
error: context "cilium-794429" does not exist

>>> host: /etc/cni:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: ip a s:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: ip r s:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: iptables-save:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: iptables table nat:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-794429

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-794429

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-794429" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-794429" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-794429

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-794429

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-794429" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-794429" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-794429" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-794429" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-794429" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: kubelet daemon config:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> k8s: kubelet logs:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-846693
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-13876/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-776670
contexts:
- context:
    cluster: NoKubernetes-846693
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:40:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-846693
  name: NoKubernetes-846693
- context:
    cluster: kubernetes-upgrade-776670
    user: kubernetes-upgrade-776670
  name: kubernetes-upgrade-776670
current-context: ""
kind: Config
users:
- name: NoKubernetes-846693
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/NoKubernetes-846693/client.crt
    client-key: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/NoKubernetes-846693/client.key
- name: kubernetes-upgrade-776670
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/kubernetes-upgrade-776670/client.crt
    client-key: /home/jenkins/minikube-integration/21969-13876/.minikube/profiles/kubernetes-upgrade-776670/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-794429

>>> host: docker daemon status:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: docker daemon config:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: docker system info:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: cri-docker daemon status:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: cri-docker daemon config:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: cri-dockerd version:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: containerd daemon status:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: containerd daemon config:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: containerd config dump:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: crio daemon status:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: crio daemon config:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: /etc/crio:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

>>> host: crio config:
* Profile "cilium-794429" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-794429"

----------------------- debugLogs end: cilium-794429 [took: 3.37463318s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-794429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-794429
--- SKIP: TestNetworkPlugins/group/cilium (3.53s)