Test Report: Docker_Linux_containerd 21932

84a896b9ca11c6987b6528b1f6e82b411b2540e2:2025-11-24:42492

Test failures (4/333)

Order   Failed test                                                    Duration (s)
303     TestStartStop/group/old-k8s-version/serial/DeployApp          14.74
304     TestStartStop/group/no-preload/serial/DeployApp               13.86
330     TestStartStop/group/embed-certs/serial/DeployApp              14.54
334     TestStartStop/group/default-k8s-diff-port/serial/DeployApp    15.6
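The old-k8s-version failure detailed below trips the assertion at start_stop_delete_test.go:194: 'ulimit -n' inside the busybox pod returns 1024 while the test expects 1048576. The other three DeployApp failures have near-identical durations and are likely the same symptom, though only the first is shown here. The following is a minimal sketch (not the minikube test code itself) of how that check can be reproduced against the failing profile; the context name old-k8s-version-513442 and the expected value 1048576 are taken from the log below.

	// ulimit_check.go: minimal sketch mirroring the DeployApp assertion.
	// It runs "ulimit -n" inside the busybox pod via kubectl and compares
	// the reported soft open-file limit against the expected value.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strconv"
		"strings"
	)

	func main() {
		context := "old-k8s-version-513442" // profile name from the failing run
		out, err := exec.Command("kubectl", "--context", context, "exec", "busybox",
			"--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kubectl exec failed:", err)
			os.Exit(1)
		}
		got, err := strconv.Atoi(strings.TrimSpace(string(out)))
		if err != nil {
			fmt.Fprintf(os.Stderr, "unexpected 'ulimit -n' output: %q\n", out)
			os.Exit(1)
		}
		const want = 1048576 // value the test expects inside the container
		if got != want {
			fmt.Printf("'ulimit -n' returned %d, expected %d\n", got, want)
			os.Exit(1)
		}
		fmt.Println("ulimit -n =", got)
	}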
TestStartStop/group/old-k8s-version/serial/DeployApp (14.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-513442 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e21ee73b-578f-48c9-826d-ab3b4bbb7871] Pending
helpers_test.go:352: "busybox" [e21ee73b-578f-48c9-826d-ab3b4bbb7871] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e21ee73b-578f-48c9-826d-ab3b4bbb7871] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003551417s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-513442 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-513442
helpers_test.go:243: (dbg) docker inspect old-k8s-version-513442:

-- stdout --
	[
	    {
	        "Id": "13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc",
	        "Created": "2025-11-24T13:47:35.092444426Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 609088,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:47:35.135903717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc/hostname",
	        "HostsPath": "/var/lib/docker/containers/13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc/hosts",
	        "LogPath": "/var/lib/docker/containers/13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc/13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc-json.log",
	        "Name": "/old-k8s-version-513442",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-513442:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-513442",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc",
	                "LowerDir": "/var/lib/docker/overlay2/bd85d41ae72067109a66add256d4bca169e9772c5d88f4cadf18fe98e5e00338-init/diff:/var/lib/docker/overlay2/0f013e03fd0eaee4efc608fb0376e7d3e8ba628388f5191310c2259ab273ad26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd85d41ae72067109a66add256d4bca169e9772c5d88f4cadf18fe98e5e00338/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd85d41ae72067109a66add256d4bca169e9772c5d88f4cadf18fe98e5e00338/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd85d41ae72067109a66add256d4bca169e9772c5d88f4cadf18fe98e5e00338/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-513442",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-513442/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-513442",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-513442",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-513442",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "712b075dd23c6c1fbc5bbaa3b37767187ba4a40be8134789ce23d7e72a4abc25",
	            "SandboxKey": "/var/run/docker/netns/712b075dd23c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-513442": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57f535f2d59b940a7e2130a9a6bcf664e3f052e878c97575bfeea5e13ed58e73",
	                    "EndpointID": "439facefab95f9d1822733d1b1004570b6d417a88dc9a1ee26ae6d774889308f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "46:21:b5:12:37:e7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-513442",
	                        "13426d2cf76c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-513442 -n old-k8s-version-513442
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-513442 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-513442 logs -n 25: (1.2175157s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-355661 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo containerd config dump                                                                                                                                                                                                        │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ start   │ -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ ssh     │ -p cilium-355661 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo crio config                                                                                                                                                                                                                   │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ delete  │ -p cilium-355661                                                                                                                                                                                                                                    │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ start   │ -p force-systemd-flag-775412 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ start   │ -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ force-systemd-flag-775412 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p force-systemd-flag-775412                                                                                                                                                                                                                        │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ -p NoKubernetes-787855 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │                     │
	│ start   │ -p cert-options-342221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ stop    │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p NoKubernetes-787855 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ cert-options-342221 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ -p cert-options-342221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p cert-options-342221                                                                                                                                                                                                                              │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p old-k8s-version-513442 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-513442    │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:48 UTC │
	│ ssh     │ -p NoKubernetes-787855 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │                     │
	│ delete  │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p no-preload-608395 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-608395         │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:48 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:47:35
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:47:35.072446  608917 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:47:35.072749  608917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:47:35.072763  608917 out.go:374] Setting ErrFile to fd 2...
	I1124 13:47:35.072768  608917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:47:35.073046  608917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:47:35.073526  608917 out.go:368] Setting JSON to false
	I1124 13:47:35.074857  608917 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8994,"bootTime":1763983061,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:47:35.074959  608917 start.go:143] virtualization: kvm guest
	I1124 13:47:35.077490  608917 out.go:179] * [no-preload-608395] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:47:35.079255  608917 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:47:35.079255  608917 notify.go:221] Checking for updates...
	I1124 13:47:35.080776  608917 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:47:35.082396  608917 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:47:35.083932  608917 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:47:35.085251  608917 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:47:35.086603  608917 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:47:35.089427  608917 config.go:182] Loaded profile config "cert-expiration-099863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:35.089575  608917 config.go:182] Loaded profile config "kubernetes-upgrade-358357": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:35.089706  608917 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:47:35.089837  608917 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:47:35.114581  608917 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:47:35.114769  608917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:47:35.180508  608917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 13:47:35.169616068 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:47:35.180627  608917 docker.go:319] overlay module found
	I1124 13:47:35.182258  608917 out.go:179] * Using the docker driver based on user configuration
	I1124 13:47:35.183642  608917 start.go:309] selected driver: docker
	I1124 13:47:35.183663  608917 start.go:927] validating driver "docker" against <nil>
	I1124 13:47:35.183675  608917 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:47:35.184437  608917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:47:35.249663  608917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 13:47:35.237755455 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:47:35.249975  608917 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:47:35.250402  608917 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:47:35.252318  608917 out.go:179] * Using Docker driver with root privileges
	I1124 13:47:35.254354  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:47:35.254446  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:35.254457  608917 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:47:35.254652  608917 start.go:353] cluster config:
	{Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:35.256201  608917 out.go:179] * Starting "no-preload-608395" primary control-plane node in "no-preload-608395" cluster
	I1124 13:47:35.257392  608917 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:47:35.258857  608917 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:47:35.260330  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:47:35.260404  608917 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:47:35.260496  608917 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json ...
	I1124 13:47:35.260537  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json: {Name:mk2f4d5eff7070dcec35f39f30e01cd0b3fcce8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:35.260546  608917 cache.go:107] acquiring lock: {Name:mk28ec677a69a6f418643b8b89331fa25b8c42f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260546  608917 cache.go:107] acquiring lock: {Name:mkad3cbb6fa2e7f41e4d7c0e1e3c74156dc55521 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260557  608917 cache.go:107] acquiring lock: {Name:mk7aef7fc4ff6e4e4541fdeb1d5e26c13a66856b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260584  608917 cache.go:107] acquiring lock: {Name:mk586ecbe7f4b4aab48f8ad28d0d7b1848898c9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260604  608917 cache.go:107] acquiring lock: {Name:mkf548ea8c9721a4e4ad1e37073c3deea8530810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260622  608917 cache.go:107] acquiring lock: {Name:mk1ce266bd6b9003a6a371facbc84809dce0c3c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260651  608917 cache.go:107] acquiring lock: {Name:mk687b2dcc146d43e9d607f472f2f08a2307baed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260663  608917 cache.go:107] acquiring lock: {Name:mk4b559f0fdae6e96edea26981618bf8d9d50b2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260712  608917 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:35.260755  608917 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:35.260801  608917 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:35.260819  608917 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:35.260852  608917 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:35.260858  608917 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:47:35.260727  608917 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:35.261039  608917 cache.go:115] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 13:47:35.261050  608917 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 523.955µs
	I1124 13:47:35.261069  608917 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 13:47:35.262249  608917 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:35.262277  608917 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:35.262359  608917 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:35.262407  608917 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:47:35.262461  608917 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:35.262522  608917 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:35.262735  608917 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:35.285963  608917 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:47:35.285989  608917 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:47:35.286014  608917 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:47:35.286057  608917 start.go:360] acquireMachinesLock for no-preload-608395: {Name:mkc9d1cf0cec9be2b369f1e47c690fc0399e88e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.286191  608917 start.go:364] duration metric: took 102.178µs to acquireMachinesLock for "no-preload-608395"
	I1124 13:47:35.286224  608917 start.go:93] Provisioning new machine with config: &{Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:47:35.286330  608917 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:47:30.558317  607669 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:47:30.558626  607669 start.go:159] libmachine.API.Create for "old-k8s-version-513442" (driver="docker")
	I1124 13:47:30.558656  607669 client.go:173] LocalClient.Create starting
	I1124 13:47:30.558725  607669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem
	I1124 13:47:30.558754  607669 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:30.558772  607669 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:30.558826  607669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem
	I1124 13:47:30.558849  607669 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:30.558860  607669 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:30.559212  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:47:30.577139  607669 cli_runner.go:211] docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:47:30.577245  607669 network_create.go:284] running [docker network inspect old-k8s-version-513442] to gather additional debugging logs...
	I1124 13:47:30.577276  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442
	W1124 13:47:30.593786  607669 cli_runner.go:211] docker network inspect old-k8s-version-513442 returned with exit code 1
	I1124 13:47:30.593826  607669 network_create.go:287] error running [docker network inspect old-k8s-version-513442]: docker network inspect old-k8s-version-513442: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-513442 not found
	I1124 13:47:30.593854  607669 network_create.go:289] output of [docker network inspect old-k8s-version-513442]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-513442 not found
	
	** /stderr **
	I1124 13:47:30.594026  607669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:30.613315  607669 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8afb578efdfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:5e:46:43:aa:fe} reservation:<nil>}
	I1124 13:47:30.614364  607669 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ca3a55f53176 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:98:62:4c:91:8f} reservation:<nil>}
	I1124 13:47:30.614827  607669 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e11236ccf9ba IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:36:3b:80:be:95:34} reservation:<nil>}
	I1124 13:47:30.615410  607669 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-35b7bf6fd97a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:12:4e:d4:19:26} reservation:<nil>}
	I1124 13:47:30.616018  607669 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1f5932eecbe7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:aa:ff:d3:cd:de:0f} reservation:<nil>}
	I1124 13:47:30.617269  607669 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7fa00}
	I1124 13:47:30.617308  607669 network_create.go:124] attempt to create docker network old-k8s-version-513442 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 13:47:30.617398  607669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-513442 old-k8s-version-513442
	I1124 13:47:30.671102  607669 network_create.go:108] docker network old-k8s-version-513442 192.168.94.0/24 created
	I1124 13:47:30.671150  607669 kic.go:121] calculated static IP "192.168.94.2" for the "old-k8s-version-513442" container
	I1124 13:47:30.671218  607669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:47:30.689078  607669 cli_runner.go:164] Run: docker volume create old-k8s-version-513442 --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:47:30.709312  607669 oci.go:103] Successfully created a docker volume old-k8s-version-513442
	I1124 13:47:30.709408  607669 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-513442-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --entrypoint /usr/bin/test -v old-k8s-version-513442:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:47:31.132905  607669 oci.go:107] Successfully prepared a docker volume old-k8s-version-513442
	I1124 13:47:31.132980  607669 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:47:31.132992  607669 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:47:31.133075  607669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-513442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:47:35.011677  607669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-513442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.878547269s)
	I1124 13:47:35.011716  607669 kic.go:203] duration metric: took 3.878721361s to extract preloaded images to volume ...
	W1124 13:47:35.011796  607669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:47:35.011829  607669 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:47:35.011871  607669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:47:35.073961  607669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-513442 --name old-k8s-version-513442 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-513442 --network old-k8s-version-513442 --ip 192.168.94.2 --volume old-k8s-version-513442:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:47:32.801968  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:32.802485  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:47:32.802542  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:32.802595  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:32.832902  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:32.832956  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:32.832963  572647 cri.go:89] found id: ""
	I1124 13:47:32.832972  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:32.833038  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.837621  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.841927  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:32.842013  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:32.877193  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:32.877214  572647 cri.go:89] found id: ""
	I1124 13:47:32.877223  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:32.877290  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.882239  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:32.882329  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:32.912677  572647 cri.go:89] found id: ""
	I1124 13:47:32.912709  572647 logs.go:282] 0 containers: []
	W1124 13:47:32.912727  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:32.912735  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:32.912799  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:32.942634  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:32.942656  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:32.942662  572647 cri.go:89] found id: ""
	I1124 13:47:32.942672  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:32.942735  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.947427  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.951442  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:32.951519  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:32.982583  572647 cri.go:89] found id: ""
	I1124 13:47:32.982614  572647 logs.go:282] 0 containers: []
	W1124 13:47:32.982626  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:32.982635  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:32.982706  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:33.013412  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:33.013432  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:33.013435  572647 cri.go:89] found id: ""
	I1124 13:47:33.013444  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:33.013492  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:33.017848  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:33.021955  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:33.022038  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:33.055691  572647 cri.go:89] found id: ""
	I1124 13:47:33.055722  572647 logs.go:282] 0 containers: []
	W1124 13:47:33.055733  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:33.055743  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:33.055822  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:33.086844  572647 cri.go:89] found id: ""
	I1124 13:47:33.086868  572647 logs.go:282] 0 containers: []
	W1124 13:47:33.086877  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:33.086887  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:33.086904  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:33.140737  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:33.140775  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:33.185221  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:33.185259  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:33.218642  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:33.218669  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:33.251506  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:33.251634  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:33.346627  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:33.346672  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:33.363530  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:33.363571  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:33.400997  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:33.401042  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:33.446051  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:33.446088  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:33.484418  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:33.484465  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:33.537056  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:33.537093  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:33.611727  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:33.611762  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:33.611778  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.150015  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:36.150435  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:47:36.150499  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:36.150559  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:36.181496  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:36.181524  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.181530  572647 cri.go:89] found id: ""
	I1124 13:47:36.181541  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:36.181626  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.186587  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.190995  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:36.191076  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:35.288531  608917 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:47:35.288826  608917 start.go:159] libmachine.API.Create for "no-preload-608395" (driver="docker")
	I1124 13:47:35.288879  608917 client.go:173] LocalClient.Create starting
	I1124 13:47:35.288981  608917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem
	I1124 13:47:35.289027  608917 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:35.289053  608917 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:35.289129  608917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem
	I1124 13:47:35.289159  608917 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:35.289172  608917 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:35.289667  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:47:35.309178  608917 cli_runner.go:211] docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:47:35.309257  608917 network_create.go:284] running [docker network inspect no-preload-608395] to gather additional debugging logs...
	I1124 13:47:35.309283  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395
	W1124 13:47:35.328323  608917 cli_runner.go:211] docker network inspect no-preload-608395 returned with exit code 1
	I1124 13:47:35.328350  608917 network_create.go:287] error running [docker network inspect no-preload-608395]: docker network inspect no-preload-608395: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-608395 not found
	I1124 13:47:35.328362  608917 network_create.go:289] output of [docker network inspect no-preload-608395]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-608395 not found
	
	** /stderr **
	I1124 13:47:35.328448  608917 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:35.351281  608917 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8afb578efdfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:5e:46:43:aa:fe} reservation:<nil>}
	I1124 13:47:35.352105  608917 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ca3a55f53176 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:98:62:4c:91:8f} reservation:<nil>}
	I1124 13:47:35.352583  608917 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e11236ccf9ba IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:36:3b:80:be:95:34} reservation:<nil>}
	I1124 13:47:35.353066  608917 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-35b7bf6fd97a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:12:4e:d4:19:26} reservation:<nil>}
	I1124 13:47:35.353566  608917 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1f5932eecbe7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:aa:ff:d3:cd:de:0f} reservation:<nil>}
	I1124 13:47:35.354145  608917 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-57f535f2d59b IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:6e:28:a9:1e:8a:96} reservation:<nil>}
	I1124 13:47:35.354775  608917 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d86bc0}
	I1124 13:47:35.354805  608917 network_create.go:124] attempt to create docker network no-preload-608395 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 13:47:35.354861  608917 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-608395 no-preload-608395
	I1124 13:47:35.432539  608917 network_create.go:108] docker network no-preload-608395 192.168.103.0/24 created
	I1124 13:47:35.432598  608917 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-608395" container
	I1124 13:47:35.432695  608917 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:47:35.453593  608917 cli_runner.go:164] Run: docker volume create no-preload-608395 --label name.minikube.sigs.k8s.io=no-preload-608395 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:47:35.471825  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:47:35.475329  608917 oci.go:103] Successfully created a docker volume no-preload-608395
	I1124 13:47:35.475418  608917 cli_runner.go:164] Run: docker run --rm --name no-preload-608395-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-608395 --entrypoint /usr/bin/test -v no-preload-608395:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:47:35.484374  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:47:35.522730  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:47:35.528813  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:47:35.529239  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:47:35.541677  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:47:35.561542  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:47:35.640840  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 13:47:35.640868  608917 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 380.250244ms
	I1124 13:47:35.640883  608917 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 13:47:35.985260  608917 oci.go:107] Successfully prepared a docker volume no-preload-608395
	I1124 13:47:35.985319  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1124 13:47:35.985414  608917 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:47:35.985453  608917 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:47:35.985506  608917 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:47:36.047047  608917 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-608395 --name no-preload-608395 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-608395 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-608395 --network no-preload-608395 --ip 192.168.103.2 --volume no-preload-608395:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:47:36.258467  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1124 13:47:36.258503  608917 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 997.955969ms
	I1124 13:47:36.258519  608917 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1124 13:47:36.410125  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Running}}
	I1124 13:47:36.432289  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.453312  608917 cli_runner.go:164] Run: docker exec no-preload-608395 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:47:36.504193  608917 oci.go:144] the created container "no-preload-608395" has a running status.
	I1124 13:47:36.504226  608917 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa...
	I1124 13:47:36.604837  608917 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:47:36.631267  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.655799  608917 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:47:36.655830  608917 kic_runner.go:114] Args: [docker exec --privileged no-preload-608395 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:47:36.705661  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.729778  608917 machine.go:94] provisionDockerMachine start ...
	I1124 13:47:36.729884  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:36.756901  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:36.757367  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:36.757380  608917 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:47:36.758446  608917 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 13:47:37.510037  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1124 13:47:37.510068  608917 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 2.249448579s
	I1124 13:47:37.510081  608917 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1124 13:47:37.572176  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1124 13:47:37.572211  608917 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.31168357s
	I1124 13:47:37.572229  608917 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1124 13:47:37.595833  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1124 13:47:37.595868  608917 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 2.335217312s
	I1124 13:47:37.595886  608917 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1124 13:47:37.719899  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1124 13:47:37.719956  608917 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 2.45935214s
	I1124 13:47:37.719969  608917 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1124 13:47:38.059972  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1124 13:47:38.060022  608917 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.799433794s
	I1124 13:47:38.060036  608917 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1124 13:47:38.060055  608917 cache.go:87] Successfully saved all images to host disk.
	I1124 13:47:39.915534  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-608395
	
	I1124 13:47:39.915567  608917 ubuntu.go:182] provisioning hostname "no-preload-608395"
	I1124 13:47:39.915651  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:39.936421  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:39.936658  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:39.936672  608917 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-608395 && echo "no-preload-608395" | sudo tee /etc/hostname
	I1124 13:47:35.415632  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Running}}
	I1124 13:47:35.436407  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.457824  607669 cli_runner.go:164] Run: docker exec old-k8s-version-513442 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:47:35.505936  607669 oci.go:144] the created container "old-k8s-version-513442" has a running status.
	I1124 13:47:35.505993  607669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa...
	I1124 13:47:35.536159  607669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:47:35.565751  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.587350  607669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:47:35.587376  607669 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-513442 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:47:35.639485  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.659275  607669 machine.go:94] provisionDockerMachine start ...
	I1124 13:47:35.659377  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:35.682791  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:35.683193  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:35.683215  607669 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:47:35.683887  607669 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57402->127.0.0.1:33435: read: connection reset by peer
	I1124 13:47:38.829345  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-513442
	
	I1124 13:47:38.829376  607669 ubuntu.go:182] provisioning hostname "old-k8s-version-513442"
	I1124 13:47:38.829451  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:38.847276  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:38.847521  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:38.847540  607669 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-513442 && echo "old-k8s-version-513442" | sudo tee /etc/hostname
	I1124 13:47:39.005190  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-513442
	
	I1124 13:47:39.005277  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.023623  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:39.023848  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:39.023866  607669 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-513442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-513442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-513442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:47:39.170228  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:47:39.170266  607669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:47:39.170286  607669 ubuntu.go:190] setting up certificates
	I1124 13:47:39.170295  607669 provision.go:84] configureAuth start
	I1124 13:47:39.170348  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.189446  607669 provision.go:143] copyHostCerts
	I1124 13:47:39.189521  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:47:39.189536  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:47:39.189619  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:47:39.189751  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:47:39.189764  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:47:39.189810  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:47:39.189989  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:47:39.190006  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:47:39.190054  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:47:39.190154  607669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-513442 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-513442]
	I1124 13:47:39.227079  607669 provision.go:177] copyRemoteCerts
	I1124 13:47:39.227139  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:47:39.227177  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.244951  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.349311  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:47:39.371319  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:47:39.391311  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 13:47:39.411071  607669 provision.go:87] duration metric: took 240.75737ms to configureAuth
	I1124 13:47:39.411102  607669 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:47:39.411303  607669 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:47:39.411317  607669 machine.go:97] duration metric: took 3.752022568s to provisionDockerMachine
	I1124 13:47:39.411325  607669 client.go:176] duration metric: took 8.852661553s to LocalClient.Create
	I1124 13:47:39.411358  607669 start.go:167] duration metric: took 8.852720089s to libmachine.API.Create "old-k8s-version-513442"
	I1124 13:47:39.411372  607669 start.go:293] postStartSetup for "old-k8s-version-513442" (driver="docker")
	I1124 13:47:39.411388  607669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:47:39.411452  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:47:39.411508  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.429085  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.536320  607669 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:47:39.540367  607669 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:47:39.540402  607669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:47:39.540414  607669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:47:39.540470  607669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:47:39.540543  607669 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:47:39.540631  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:47:39.549275  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:39.573695  607669 start.go:296] duration metric: took 162.301306ms for postStartSetup
	I1124 13:47:39.574191  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.593438  607669 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/config.json ...
	I1124 13:47:39.593801  607669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:47:39.593897  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.615008  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.717288  607669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:47:39.722340  607669 start.go:128] duration metric: took 9.166080327s to createHost
	I1124 13:47:39.722370  607669 start.go:83] releasing machines lock for "old-k8s-version-513442", held for 9.166275546s
	I1124 13:47:39.722447  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.743680  607669 ssh_runner.go:195] Run: cat /version.json
	I1124 13:47:39.743731  607669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:47:39.743745  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.743812  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.763336  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.763737  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.929805  607669 ssh_runner.go:195] Run: systemctl --version
	I1124 13:47:39.938447  607669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:47:39.944068  607669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:47:39.944147  607669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:47:39.974609  607669 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:47:39.974641  607669 start.go:496] detecting cgroup driver to use...
	I1124 13:47:39.974679  607669 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:47:39.974728  607669 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:47:39.990824  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:47:40.004856  607669 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:47:40.004920  607669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:47:40.024248  607669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:47:40.044433  607669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:47:40.145638  607669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:47:40.247759  607669 docker.go:234] disabling docker service ...
	I1124 13:47:40.247829  607669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:47:40.269922  607669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:47:40.284840  607669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:47:40.379978  607669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:47:40.471616  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:47:40.485207  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:47:40.501980  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 13:47:40.513545  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:47:40.524134  607669 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:47:40.524215  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:47:40.533927  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:40.543474  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:47:40.553177  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:40.563129  607669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:47:40.572813  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:47:40.583799  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:47:40.593872  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:47:40.604166  607669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:47:40.612262  607669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:47:40.620472  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:40.706065  607669 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 13:47:40.809269  607669 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:47:40.809335  607669 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:47:40.814110  607669 start.go:564] Will wait 60s for crictl version
	I1124 13:47:40.814187  607669 ssh_runner.go:195] Run: which crictl
	I1124 13:47:40.818745  607669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:47:40.843808  607669 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:47:40.843877  607669 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:40.865477  607669 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:40.893673  607669 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 13:47:36.234464  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:36.234492  572647 cri.go:89] found id: ""
	I1124 13:47:36.234504  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:36.234584  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.240249  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:36.240335  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:36.279967  572647 cri.go:89] found id: ""
	I1124 13:47:36.279998  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.280009  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:36.280027  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:36.280082  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:36.313257  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:36.313286  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:36.313292  572647 cri.go:89] found id: ""
	I1124 13:47:36.313302  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:36.313364  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.317818  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.322103  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:36.322170  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:36.352450  572647 cri.go:89] found id: ""
	I1124 13:47:36.352485  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.352497  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:36.352506  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:36.352569  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:36.381849  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:36.381876  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:36.381881  572647 cri.go:89] found id: ""
	I1124 13:47:36.381896  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:36.381995  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.386540  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.391244  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:36.391326  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:36.425813  572647 cri.go:89] found id: ""
	I1124 13:47:36.425845  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.425856  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:36.425864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:36.425945  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:36.461097  572647 cri.go:89] found id: ""
	I1124 13:47:36.461127  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.461139  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:36.461153  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:36.461172  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:36.499983  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:36.500029  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:36.521192  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:36.521223  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:36.557807  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:36.557859  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:36.611092  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:36.611122  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:36.647506  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:36.647538  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:36.773107  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:36.773142  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:36.847612  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:36.847637  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:36.847662  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.887116  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:36.887154  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:36.924700  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:36.924746  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:36.974655  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:36.974689  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:37.017086  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:37.017118  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.548013  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:39.548547  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:47:39.548616  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:39.548676  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:39.577831  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:39.577852  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:39.577857  572647 cri.go:89] found id: ""
	I1124 13:47:39.577867  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:39.577947  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.582354  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.586625  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:39.586710  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:39.614522  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:39.614543  572647 cri.go:89] found id: ""
	I1124 13:47:39.614552  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:39.614607  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.619054  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:39.619127  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:39.646326  572647 cri.go:89] found id: ""
	I1124 13:47:39.646352  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.646363  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:39.646370  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:39.646429  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:39.672725  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:39.672745  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:39.672749  572647 cri.go:89] found id: ""
	I1124 13:47:39.672757  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:39.672814  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.677191  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.681175  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:39.681258  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:39.708431  572647 cri.go:89] found id: ""
	I1124 13:47:39.708455  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.708464  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:39.708470  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:39.708519  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:39.740642  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.740666  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:39.740672  572647 cri.go:89] found id: ""
	I1124 13:47:39.740682  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:39.740749  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.745558  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.749963  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:39.750090  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:39.785165  572647 cri.go:89] found id: ""
	I1124 13:47:39.785200  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.785213  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:39.785223  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:39.785297  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:39.816314  572647 cri.go:89] found id: ""
	I1124 13:47:39.816344  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.816356  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:39.816369  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:39.816386  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:39.855047  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:39.855082  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.884850  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:39.884886  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:39.923160  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:39.923209  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:40.011551  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:40.011587  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:40.028754  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:40.028784  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:40.073406  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:40.073463  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:40.118088  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:40.118130  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:40.186938  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:40.186963  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:40.186979  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:40.225544  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:40.225575  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:40.264167  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:40.264212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:40.310248  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:40.310285  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
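	(The post-mortem above pulls the last 400 lines from each control-plane container with `crictl logs --tail`, using the container ids found by `crictl ps -a --quiet`. A minimal Go sketch of that probe, not minikube's own code; it assumes crictl is on the PATH and is invoked via sudo, and the container id shown is a placeholder:)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// tailContainerLogs returns the last n lines of a container's logs via crictl,
	// the same probe the post-mortem runs for each apiserver/etcd/scheduler id.
	func tailContainerLogs(id string, n int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		logs, err := tailContainerLogs("8249c9dabc6b", 400) // placeholder id
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(logs)
	}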
	I1124 13:47:40.101111  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-608395
	
	I1124 13:47:40.101196  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.122644  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:40.122921  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:40.122949  608917 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-608395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-608395/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-608395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:47:40.280196  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
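	(The SSH command just above is the idempotent hostname entry: only when no /etc/hosts line already maps the machine name does it rewrite the 127.0.1.1 entry or append one. A rough stdlib-only Go equivalent, for illustration only; the file path and machine name are placeholders, and the real step runs over SSH with sudo:)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the shell above: if no line already ends in the
	// machine name, rewrite an existing 127.0.1.1 entry or append a new one.
	func ensureHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) >= 2 && f[len(f)-1] == name {
				return nil // an entry for this name is already present
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
			}
		}
		lines = append(lines, "127.0.1.1 "+name)
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("testdata/hosts", "no-preload-608395"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}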
	I1124 13:47:40.280226  608917 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:47:40.280268  608917 ubuntu.go:190] setting up certificates
	I1124 13:47:40.280293  608917 provision.go:84] configureAuth start
	I1124 13:47:40.280380  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.303469  608917 provision.go:143] copyHostCerts
	I1124 13:47:40.303532  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:47:40.303543  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:47:40.303590  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:47:40.303726  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:47:40.303739  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:47:40.303772  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:47:40.303856  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:47:40.303868  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:47:40.303892  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:47:40.303983  608917 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.no-preload-608395 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-608395]
	I1124 13:47:40.375070  608917 provision.go:177] copyRemoteCerts
	I1124 13:47:40.375131  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:47:40.375180  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.394610  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.501959  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:47:40.523137  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 13:47:40.542279  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:47:40.562226  608917 provision.go:87] duration metric: took 281.905194ms to configureAuth
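	(configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.103.2, localhost, minikube and the machine name. A minimal crypto/x509 sketch of a certificate carrying those SANs; it is self-signed here for brevity, whereas minikube signs the server cert with its CA key:)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-608395"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provision.go line above.
			DNSNames:    []string{"localhost", "minikube", "no-preload-608395"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}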
	I1124 13:47:40.562265  608917 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:47:40.562572  608917 config.go:182] Loaded profile config "no-preload-608395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:40.562595  608917 machine.go:97] duration metric: took 3.832793094s to provisionDockerMachine
	I1124 13:47:40.562604  608917 client.go:176] duration metric: took 5.273718281s to LocalClient.Create
	I1124 13:47:40.562649  608917 start.go:167] duration metric: took 5.273809151s to libmachine.API.Create "no-preload-608395"
	I1124 13:47:40.562659  608917 start.go:293] postStartSetup for "no-preload-608395" (driver="docker")
	I1124 13:47:40.562671  608917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:47:40.562721  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:47:40.562769  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.582715  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.688873  608917 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:47:40.692683  608917 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:47:40.692717  608917 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:47:40.692818  608917 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:47:40.692947  608917 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:47:40.693078  608917 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:47:40.693208  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:47:40.702139  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:40.725883  608917 start.go:296] duration metric: took 163.205649ms for postStartSetup
	I1124 13:47:40.726376  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.744526  608917 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json ...
	I1124 13:47:40.745022  608917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:47:40.745098  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.763260  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.869180  608917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:47:40.874423  608917 start.go:128] duration metric: took 5.58807074s to createHost
	I1124 13:47:40.874458  608917 start.go:83] releasing machines lock for "no-preload-608395", held for 5.58825096s
	I1124 13:47:40.874540  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.896709  608917 ssh_runner.go:195] Run: cat /version.json
	I1124 13:47:40.896763  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.896807  608917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:47:40.896904  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.918859  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.920576  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:41.084454  608917 ssh_runner.go:195] Run: systemctl --version
	I1124 13:47:41.091582  608917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:47:41.097406  608917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:47:41.097478  608917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:47:41.125540  608917 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:47:41.125566  608917 start.go:496] detecting cgroup driver to use...
	I1124 13:47:41.125601  608917 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:47:41.125650  608917 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:47:41.148294  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:47:41.167664  608917 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:47:41.167740  608917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:47:41.189235  608917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:47:41.213594  608917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:47:41.336134  608917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:47:41.426955  608917 docker.go:234] disabling docker service ...
	I1124 13:47:41.427023  608917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:47:41.448189  608917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:47:41.462073  608917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:47:41.548298  608917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:47:41.635202  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:47:41.649149  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:47:41.664451  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 13:47:41.676460  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:47:41.686131  608917 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:47:41.686199  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:47:41.695720  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:41.705503  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:47:41.714879  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:41.724369  608917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:47:41.733131  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:47:41.742525  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:47:41.751826  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:47:41.762473  608917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:47:41.770755  608917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:47:41.779154  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:41.869150  608917 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 13:47:41.957807  608917 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:47:41.957876  608917 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:47:41.965431  608917 start.go:564] Will wait 60s for crictl version
	I1124 13:47:41.965500  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:41.970973  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:47:42.001317  608917 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:47:42.001405  608917 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:42.026320  608917 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:42.052318  608917 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
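	(The run of sed edits at 13:47:41 rewrites /etc/containerd/config.toml in place; the key change for this run is forcing SystemdCgroup = true so containerd matches the systemd cgroup driver detected on the host. A small Go sketch of that one substitution; the file path is a placeholder, and the real edit runs over SSH with sudo:)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		path := "config.toml" // placeholder for /etc/containerd/config.toml
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("SystemdCgroup forced to true in", path)
	}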
	I1124 13:47:40.896022  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:40.918522  607669 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 13:47:40.923315  607669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:40.935781  607669 kubeadm.go:884] updating cluster {Name:old-k8s-version-513442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:47:40.935932  607669 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:47:40.935998  607669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:40.965650  607669 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:47:40.965689  607669 containerd.go:534] Images already preloaded, skipping extraction
	I1124 13:47:40.965773  607669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:40.999412  607669 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:47:40.999441  607669 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:47:40.999451  607669 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1124 13:47:40.999568  607669 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-513442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:47:40.999640  607669 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:47:41.030216  607669 cni.go:84] Creating CNI manager for ""
	I1124 13:47:41.030250  607669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:41.030273  607669 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:47:41.030304  607669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-513442 NodeName:old-k8s-version-513442 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:47:41.030479  607669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-513442"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
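	(The generated kubeadm.yaml shown above is a single file holding four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A stdlib-only sketch that splits such a file on the --- separators and lists each kind; the file path is a placeholder:)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // placeholder path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for i, doc := range strings.Split(string(data), "\n---") {
			kind := "unknown"
			for _, line := range strings.Split(doc, "\n") {
				t := strings.TrimSpace(line)
				if strings.HasPrefix(t, "kind:") {
					kind = strings.TrimSpace(strings.TrimPrefix(t, "kind:"))
					break
				}
			}
			fmt.Printf("document %d: %s\n", i+1, kind)
		}
	}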
	
	I1124 13:47:41.030593  607669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 13:47:41.040496  607669 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:47:41.040574  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:47:41.048965  607669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1124 13:47:41.063246  607669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:47:41.080199  607669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1124 13:47:41.095141  607669 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:47:41.099735  607669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:41.111816  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:41.205774  607669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:47:41.229647  607669 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442 for IP: 192.168.94.2
	I1124 13:47:41.229678  607669 certs.go:195] generating shared ca certs ...
	I1124 13:47:41.229702  607669 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.229867  607669 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:47:41.229906  607669 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:47:41.229935  607669 certs.go:257] generating profile certs ...
	I1124 13:47:41.230010  607669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key
	I1124 13:47:41.230025  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt with IP's: []
	I1124 13:47:41.438692  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt ...
	I1124 13:47:41.438735  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: {Name:mkbb44e092f1569b20ffeeea6d19871e0c7ea39c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.438903  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key ...
	I1124 13:47:41.438942  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key: {Name:mkcdbea7ce1dc4681fc91bbc4b78d2c028c94687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.439100  607669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4
	I1124 13:47:41.439127  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 13:47:41.518895  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 ...
	I1124 13:47:41.518941  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4: {Name:mk47b90333d21f736ed33504f6da28b133242551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.519134  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4 ...
	I1124 13:47:41.519153  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4: {Name:mk4592466df77ceb7a68fa27e5f9a0201b1a8063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.519239  607669 certs.go:382] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt
	I1124 13:47:41.519312  607669 certs.go:386] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key
	I1124 13:47:41.519368  607669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key
	I1124 13:47:41.519388  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt with IP's: []
	I1124 13:47:41.757186  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt ...
	I1124 13:47:41.757217  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt: {Name:mkb434108adbee544176aebf04c9ed8a63b76175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.757418  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key ...
	I1124 13:47:41.757442  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key: {Name:mk640e3789cee888121bd6cc947590ae24e90dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.757683  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:47:41.757725  607669 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:47:41.757736  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:47:41.757777  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:47:41.757814  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:47:41.757849  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:47:41.757940  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:41.758610  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:47:41.778634  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:47:41.799349  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:47:41.825279  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:47:41.844900  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 13:47:41.865036  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:47:41.887428  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:47:41.912645  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:47:41.937284  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:47:41.966303  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:47:41.989056  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:47:42.011989  607669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:47:42.027976  607669 ssh_runner.go:195] Run: openssl version
	I1124 13:47:42.036340  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:47:42.046698  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.051406  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.051481  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.089903  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:47:42.100357  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:47:42.110986  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.115955  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.116031  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.153310  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:47:42.163209  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:47:42.173625  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.178229  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.178308  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.216281  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
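	(The openssl/ln steps above install each PEM under /usr/share/ca-certificates and then link it as /etc/ssl/certs/<subject-hash>.0, which is how OpenSSL locates CA certificates by hash. A rough Go sketch of the same sequence; it assumes the openssl binary is available and that the process may write those paths:)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert copies a PEM into /usr/share/ca-certificates and links it as
	// /etc/ssl/certs/<subject-hash>.0, mirroring the test -s / ln -fs steps above.
	func installCACert(pemPath, name string) error {
		dst := filepath.Join("/usr/share/ca-certificates", name)
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return err
		}
		if err := os.WriteFile(dst, data, 0644); err != nil {
			return err
		}
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(dst, link)
	}

	func main() {
		if err := installCACert("minikubeCA.pem", "minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}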
	I1124 13:47:42.228415  607669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:47:42.232854  607669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:47:42.232959  607669 kubeadm.go:401] StartCluster: {Name:old-k8s-version-513442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:42.233058  607669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:47:42.233119  607669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:47:42.262130  607669 cri.go:89] found id: ""
	I1124 13:47:42.262225  607669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:47:42.271622  607669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:47:42.280568  607669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:47:42.280637  607669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:47:42.289222  607669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:47:42.289241  607669 kubeadm.go:158] found existing configuration files:
	
	I1124 13:47:42.289287  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:47:42.297481  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:47:42.297560  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:47:42.306305  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:47:42.315150  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:47:42.315224  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:47:42.324595  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:47:42.333840  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:47:42.333922  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:47:42.344021  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:47:42.355171  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:47:42.355226  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:47:42.364345  607669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:47:42.433190  607669 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 13:47:42.433270  607669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:47:42.487608  607669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:47:42.487695  607669 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:47:42.487758  607669 kubeadm.go:319] OS: Linux
	I1124 13:47:42.487823  607669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:47:42.487892  607669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:47:42.487986  607669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:47:42.488057  607669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:47:42.488125  607669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:47:42.488216  607669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:47:42.488285  607669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:47:42.488352  607669 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:47:42.585565  607669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:47:42.585750  607669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:47:42.585896  607669 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 13:47:42.762435  607669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:47:42.054673  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:42.073094  608917 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 13:47:42.078208  608917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:42.089858  608917 kubeadm.go:884] updating cluster {Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:47:42.090126  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:47:42.090181  608917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:42.117576  608917 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1124 13:47:42.117601  608917 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 13:47:42.117671  608917 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:42.117683  608917 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.117696  608917 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.117708  608917 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.117683  608917 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.117737  608917 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.117738  608917 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.117773  608917 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.119957  608917 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.120028  608917 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.120041  608917 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.120103  608917 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.120144  608917 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.120206  608917 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.120361  608917 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:42.120651  608917 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.324599  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1124 13:47:42.324658  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.329752  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1124 13:47:42.329811  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1124 13:47:42.340410  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1124 13:47:42.340483  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.345994  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1124 13:47:42.346082  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.350632  608917 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1124 13:47:42.350771  608917 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.350861  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.354889  608917 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 13:47:42.355021  608917 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.355078  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.365506  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1124 13:47:42.365584  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.370164  608917 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1124 13:47:42.370246  608917 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.370299  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.371573  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.371569  608917 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1124 13:47:42.371633  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.371663  608917 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.371700  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.383984  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1124 13:47:42.384064  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.391339  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1124 13:47:42.391424  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.394058  608917 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1124 13:47:42.394107  608917 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.394139  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.394173  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.394139  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.410796  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.412029  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.415223  608917 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1124 13:47:42.415273  608917 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.415318  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.430558  608917 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1124 13:47:42.430610  608917 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.430661  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.432115  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.432240  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.432710  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.449068  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.451309  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.451333  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.451434  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.471426  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.471426  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.472006  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.507575  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:47:42.507696  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:42.507737  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.507752  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.507776  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:47:42.507812  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:42.512031  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:47:42.512160  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.512183  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.512220  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:47:42.512281  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:42.542249  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1124 13:47:42.542293  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1124 13:47:42.542356  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.542419  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.542436  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1124 13:47:42.542450  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 13:47:42.542460  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1124 13:47:42.542482  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 13:47:42.542522  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1124 13:47:42.542541  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1124 13:47:42.547506  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:47:42.547609  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:42.591222  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:47:42.591265  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:47:42.591339  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:42.591358  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:42.630891  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1124 13:47:42.630960  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1124 13:47:42.635881  608917 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.635984  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.696822  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1124 13:47:42.696868  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1124 13:47:42.696964  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1124 13:47:42.696987  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1124 13:47:42.855586  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 13:47:43.017613  608917 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:43.017692  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:43.363331  608917 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1124 13:47:43.363429  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:44.322473  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.304751727s)
	I1124 13:47:44.322506  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1124 13:47:44.322534  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:44.322535  608917 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 13:47:44.322572  608917 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:44.322581  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:44.322611  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:44.327186  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
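
Note: the 608917 run above follows a fixed check/remove/import cycle for every cached image. The same cycle can be reproduced by hand on the node with the commands the log records (the pause image is used here as the example):

  sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1         # is the tag already present in containerd's k8s.io namespace?
  sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1             # remove any copy whose digest does not match the cache
  sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1  # re-import the cached tarball that was scp'd to the node
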
	I1124 13:47:42.765072  607669 out.go:252]   - Generating certificates and keys ...
	I1124 13:47:42.765189  607669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:47:42.765429  607669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:47:42.918631  607669 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:47:43.145530  607669 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:47:43.262863  607669 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:47:43.516853  607669 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:47:43.680193  607669 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:47:43.680382  607669 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-513442] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:47:43.927450  607669 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:47:43.927668  607669 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-513442] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:47:44.210866  607669 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:47:44.444469  607669 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:47:44.571652  607669 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:47:44.571791  607669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:47:44.658495  607669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:47:44.899827  607669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:47:45.259836  607669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:47:45.407067  607669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:47:45.407645  607669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:47:45.412109  607669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:47:42.868629  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:45.407011  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.084400483s)
	I1124 13:47:45.407048  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1124 13:47:45.407074  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:45.407121  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:45.407011  608917 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.079785919s)
	I1124 13:47:45.407225  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:46.754417  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.347254819s)
	I1124 13:47:46.754464  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 13:47:46.754487  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:46.754539  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:46.754423  608917 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.34716741s)
	I1124 13:47:46.754625  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:46.791381  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 13:47:46.791500  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:48.250258  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.49567347s)
	I1124 13:47:48.250293  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 13:47:48.250322  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:48.250369  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:48.250393  608917 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.458859359s)
	I1124 13:47:48.250436  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 13:47:48.250458  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 13:47:49.525346  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.274952475s)
	I1124 13:47:49.525372  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 13:47:49.525397  608917 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:49.525432  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:45.413783  607669 out.go:252]   - Booting up control plane ...
	I1124 13:47:45.414000  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:47:45.414122  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:47:45.415606  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:47:45.433197  607669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:47:45.434777  607669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:47:45.434850  607669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:47:45.555124  607669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 13:47:47.870054  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 13:47:47.870131  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:47.870207  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:47.909612  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:47:47.909637  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:47.909644  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:47.909649  572647 cri.go:89] found id: ""
	I1124 13:47:47.909660  572647 logs.go:282] 3 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:47.909721  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.915163  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.920826  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.926251  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:47.926326  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:47.968362  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:47.968399  572647 cri.go:89] found id: ""
	I1124 13:47:47.968412  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:47.968487  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.973840  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:47.973955  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:48.011120  572647 cri.go:89] found id: ""
	I1124 13:47:48.011151  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.011163  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:48.011172  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:48.011242  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:48.049409  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:48.049433  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:48.049439  572647 cri.go:89] found id: ""
	I1124 13:47:48.049449  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:48.049612  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.055041  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.061717  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:48.061795  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:48.098008  572647 cri.go:89] found id: ""
	I1124 13:47:48.098036  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.098048  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:48.098056  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:48.098116  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:48.134832  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:47:48.134858  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:48.134864  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:48.134868  572647 cri.go:89] found id: ""
	I1124 13:47:48.134879  572647 logs.go:282] 3 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:48.134960  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.140512  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.146067  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.151167  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:48.151293  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:48.194241  572647 cri.go:89] found id: ""
	I1124 13:47:48.194275  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.194287  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:48.194297  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:48.194366  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:48.235586  572647 cri.go:89] found id: ""
	I1124 13:47:48.235617  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.235629  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:48.235644  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:48.235660  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:48.322131  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:47:48.322175  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:47:48.358925  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:48.358964  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:48.399403  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:47:48.399439  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:47:48.442576  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:48.442621  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:48.490297  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:48.490336  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:48.543239  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:48.543277  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:48.591561  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:48.591604  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:48.639975  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:48.640012  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:48.703335  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:48.703393  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:48.760778  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:48.760820  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:48.887283  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:48.887328  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:48.915138  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:48.915177  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
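
Note: process 572647 belongs to a different profile and is collecting post-failure diagnostics while its apiserver healthz probe times out. A minimal sketch of the same collection, run manually inside the node (the container ID is a placeholder):

  sudo crictl ps -a --quiet --name=kube-apiserver   # list apiserver containers, running or exited
  sudo crictl logs --tail 400 <container-id>        # last 400 log lines from one container
  sudo journalctl -u containerd -n 400              # containerd unit log
  sudo journalctl -u kubelet -n 400                 # kubelet unit log
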
	I1124 13:47:50.557442  607669 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002632 seconds
	I1124 13:47:50.557627  607669 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:47:50.572390  607669 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:47:51.098533  607669 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:47:51.098764  607669 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-513442 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:47:51.610053  607669 kubeadm.go:319] [bootstrap-token] Using token: eki30b.4i7191y9601t9kqb
	I1124 13:47:51.611988  607669 out.go:252]   - Configuring RBAC rules ...
	I1124 13:47:51.612142  607669 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:47:51.618056  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:47:51.627751  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:47:51.631902  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:47:51.635666  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:47:51.643042  607669 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:47:51.655046  607669 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:47:51.879254  607669 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:47:52.022857  607669 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:47:52.024273  607669 kubeadm.go:319] 
	I1124 13:47:52.024439  607669 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:47:52.024451  607669 kubeadm.go:319] 
	I1124 13:47:52.024565  607669 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:47:52.024593  607669 kubeadm.go:319] 
	I1124 13:47:52.024628  607669 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:47:52.024712  607669 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:47:52.024786  607669 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:47:52.024795  607669 kubeadm.go:319] 
	I1124 13:47:52.024870  607669 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:47:52.024880  607669 kubeadm.go:319] 
	I1124 13:47:52.024984  607669 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:47:52.024995  607669 kubeadm.go:319] 
	I1124 13:47:52.025066  607669 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:47:52.025175  607669 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:47:52.025273  607669 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:47:52.025282  607669 kubeadm.go:319] 
	I1124 13:47:52.025399  607669 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:47:52.025508  607669 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:47:52.025517  607669 kubeadm.go:319] 
	I1124 13:47:52.025633  607669 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token eki30b.4i7191y9601t9kqb \
	I1124 13:47:52.025782  607669 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c \
	I1124 13:47:52.025814  607669 kubeadm.go:319] 	--control-plane 
	I1124 13:47:52.025823  607669 kubeadm.go:319] 
	I1124 13:47:52.025955  607669 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:47:52.025964  607669 kubeadm.go:319] 
	I1124 13:47:52.026081  607669 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token eki30b.4i7191y9601t9kqb \
	I1124 13:47:52.026226  607669 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c 
	I1124 13:47:52.029215  607669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:47:52.029395  607669 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:47:52.029436  607669 cni.go:84] Creating CNI manager for ""
	I1124 13:47:52.029450  607669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:52.032075  607669 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:47:52.378094  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.852631537s)
	I1124 13:47:52.378131  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 13:47:52.378164  608917 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:52.378216  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:52.826755  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 13:47:52.826808  608917 cache_images.go:125] Successfully loaded all cached images
	I1124 13:47:52.826816  608917 cache_images.go:94] duration metric: took 10.70919772s to LoadCachedImages
	I1124 13:47:52.826831  608917 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1124 13:47:52.826984  608917 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-608395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:47:52.827057  608917 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:47:52.858503  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:47:52.858531  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:52.858557  608917 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:47:52.858588  608917 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-608395 NodeName:no-preload-608395 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:47:52.858757  608917 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-608395"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:47:52.858835  608917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:47:52.869416  608917 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 13:47:52.869483  608917 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 13:47:52.881260  608917 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 13:47:52.881274  608917 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 13:47:52.881284  608917 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 13:47:52.881370  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 13:47:52.886648  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 13:47:52.886683  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 13:47:53.829310  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:47:53.844364  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 13:47:53.848663  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 13:47:53.848703  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 13:47:54.078871  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 13:47:54.083904  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 13:47:54.083971  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
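
Note: the v1.34.1 kubelet, kubeadm and kubectl binaries are downloaded from dl.k8s.io against their published SHA-256 checksums and then copied into /var/lib/minikube/binaries. A minimal sketch of the same checksum-verified download for one binary (local file names are illustrative):

  curl -fsSL -o kubelet        https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet
  curl -fsSL -o kubelet.sha256 https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256
  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # format is "<hash>  <file>", two spaces
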
	I1124 13:47:54.263727  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:47:54.272819  608917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 13:47:54.287533  608917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:47:54.307319  608917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
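
Note: at this point the generated kubeadm config shown earlier has been written to /var/tmp/minikube/kubeadm.yaml.new on the node. One way to sanity-check such a file by hand, before minikube invokes kubeadm for real, is a dry run (illustrative only, not part of the test flow):

  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
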
	I1124 13:47:54.321728  608917 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:47:54.326108  608917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:54.337568  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:54.423252  608917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:47:54.446892  608917 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395 for IP: 192.168.103.2
	I1124 13:47:54.446932  608917 certs.go:195] generating shared ca certs ...
	I1124 13:47:54.446950  608917 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.447115  608917 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:47:54.447173  608917 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:47:54.447189  608917 certs.go:257] generating profile certs ...
	I1124 13:47:54.447250  608917 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key
	I1124 13:47:54.447265  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt with IP's: []
	I1124 13:47:54.480111  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt ...
	I1124 13:47:54.480143  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: {Name:mk0373d89f453529126dca865f8c4273a9b76c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.480318  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key ...
	I1124 13:47:54.480326  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key: {Name:mkd4fd6c97a850045d4415dcd6682504ca05b6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.480412  608917 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0
	I1124 13:47:54.480432  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 13:47:54.564575  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 ...
	I1124 13:47:54.564606  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0: {Name:mk39921501aaa8b9dfdaa0c59584189fbc232834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.564812  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0 ...
	I1124 13:47:54.564832  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0: {Name:mk1e5ec23cae444088ab39a7c9f4bd7f0b68695e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.565002  608917 certs.go:382] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt
	I1124 13:47:54.565092  608917 certs.go:386] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key
	I1124 13:47:54.565147  608917 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key
	I1124 13:47:54.565166  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt with IP's: []
	I1124 13:47:54.682010  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt ...
	I1124 13:47:54.682042  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt: {Name:mk61707e6277a856c1f1cee667479489cd8cfc56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.682251  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key ...
	I1124 13:47:54.682270  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key: {Name:mkdc07f88aff1f58330c9757ac629acf2062c9ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
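
Note: certs.go/crypto.go above generate CA-signed client and serving certificates for the profile. A rough openssl equivalent of one such signing step, assuming the minikube CA files are at hand; file names and the CN/O subject values are illustrative assumptions, not taken from the log:

  openssl genrsa -out client.key 2048
  openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
  openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt
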
	I1124 13:47:54.682520  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:47:54.682564  608917 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:47:54.682574  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:47:54.682602  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:47:54.682626  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:47:54.682651  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:47:54.682697  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:54.683371  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:47:54.703387  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:47:54.722770  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:47:54.743107  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:47:54.763697  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 13:47:54.783164  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:47:54.802752  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:47:54.822653  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:47:54.843126  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:47:54.867619  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:47:54.887814  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:47:54.907876  608917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:47:54.922379  608917 ssh_runner.go:195] Run: openssl version
	I1124 13:47:54.929636  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:47:54.940237  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.944856  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.944961  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.983788  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
	I1124 13:47:54.994031  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:47:55.004849  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.010168  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.010231  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.048930  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:47:55.058618  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:47:55.068496  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:52.033462  607669 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:47:52.040052  607669 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 13:47:52.040080  607669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:47:52.058896  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:47:52.863538  607669 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:47:52.863612  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:52.863691  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-513442 minikube.k8s.io/updated_at=2025_11_24T13_47_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=old-k8s-version-513442 minikube.k8s.io/primary=true
	I1124 13:47:52.876635  607669 ops.go:34] apiserver oom_adj: -16
	I1124 13:47:52.948231  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:53.449196  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:53.948546  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:54.448277  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:54.949098  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:55.073505  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:55.073568  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:55.110353  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
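
The block above (hash each PEM under /usr/share/ca-certificates with `openssl x509 -hash`, then link it as `<hash>.0` in /etc/ssl/certs) is how the guest is made to trust minikube's CA certificates, since OpenSSL-based clients look up CAs by subject-hash filename. A minimal standalone sketch of the same idea, assuming openssl and sudo are available on the target; this is only an illustration, not minikube's actual certs.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a CA PEM and exposes it
// as /etc/ssl/certs/<hash>.0, mirroring the logged hash-and-symlink step.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Equivalent to the logged `test -L ... || ln -fs ...` guard: ln -fs is idempotent.
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
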
	I1124 13:47:55.120226  608917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:47:55.124508  608917 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:47:55.124574  608917 kubeadm.go:401] StartCluster: {Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:55.124676  608917 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:47:55.124734  608917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:47:55.153610  608917 cri.go:89] found id: ""
	I1124 13:47:55.153686  608917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:47:55.163237  608917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:47:55.172281  608917 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:47:55.172352  608917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:47:55.181432  608917 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:47:55.181458  608917 kubeadm.go:158] found existing configuration files:
	
	I1124 13:47:55.181515  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:47:55.190814  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:47:55.190897  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:47:55.200577  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:47:55.210272  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:47:55.210344  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:47:55.219990  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:47:55.228828  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:47:55.228885  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:47:55.238104  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:47:55.246631  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:47:55.246745  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
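
The grep/rm sequence above is the stale-config cleanup: each existing kubeadm config file that does not reference control-plane.minikube.internal is deleted so `kubeadm init` can regenerate it. A rough Go sketch of that loop, with the endpoint and file list hard-coded from the log purely for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeConfigs removes any kubeadm config file that does not point
// at the expected control-plane endpoint, matching the grep/rm pattern above.
func cleanStaleKubeConfigs() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			// grep exits non-zero when the file is missing or the endpoint is absent
			fmt.Printf("removing stale %s\n", f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() { cleanStaleKubeConfigs() }
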
	I1124 13:47:55.255509  608917 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:47:55.316154  608917 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:47:55.376542  608917 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:47:55.448626  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:55.949156  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:56.449055  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:56.949140  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:57.448946  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:57.948732  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:58.448437  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:58.948803  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.449172  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.948946  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.001079  572647 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.085873793s)
	W1124 13:47:59.001127  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 13:47:59.001145  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:59.001163  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:00.448856  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:00.948957  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:01.448664  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:01.948985  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:02.448486  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:02.948890  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:03.448380  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:03.948515  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:04.448564  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:04.527535  607669 kubeadm.go:1114] duration metric: took 11.66399569s to wait for elevateKubeSystemPrivileges
	I1124 13:48:04.527576  607669 kubeadm.go:403] duration metric: took 22.29462596s to StartCluster
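
The burst of `kubectl get sa default` calls above, spaced roughly 500ms apart, is a poll loop: kubeadm init has finished and minikube waits for the default service account to exist before it grants kube-system privileges (the "elevateKubeSystemPrivileges" duration just logged). A rough sketch of that loop, where the binary path, kubeconfig, and timeout are placeholders rather than minikube's exact values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, matching the retry cadence visible in the log.
func waitForDefaultSA(kubectlPath, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectlPath, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}
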
	I1124 13:48:04.527612  607669 settings.go:142] acquiring lock: {Name:mka599a3c9bae62ffb84d261186583052ce40f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:04.527702  607669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:48:04.529054  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:04.529299  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:48:04.529306  607669 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:48:04.529383  607669 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:48:04.529498  607669 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-513442"
	I1124 13:48:04.529517  607669 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:48:04.529519  607669 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-513442"
	I1124 13:48:04.529535  607669 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-513442"
	I1124 13:48:04.529561  607669 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-513442"
	I1124 13:48:04.529641  607669 host.go:66] Checking if "old-k8s-version-513442" exists ...
	I1124 13:48:04.529946  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.530180  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.531152  607669 out.go:179] * Verifying Kubernetes components...
	I1124 13:48:04.532717  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:48:04.557008  607669 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:48:04.558405  607669 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:04.558429  607669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:48:04.558495  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:48:04.562314  607669 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-513442"
	I1124 13:48:04.562381  607669 host.go:66] Checking if "old-k8s-version-513442" exists ...
	I1124 13:48:04.563175  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.584062  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:48:04.598587  607669 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:04.598613  607669 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:48:04.598683  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:48:04.628606  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:48:04.653771  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:48:04.701037  607669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:48:04.714197  607669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:04.765729  607669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:04.912320  607669 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
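
The sed pipeline a few lines up rewrites the CoreDNS ConfigMap so that a `hosts` block resolving host.minikube.internal to the gateway IP sits ahead of the existing `forward . /etc/resolv.conf` plugin. A small Go sketch of the same text transformation; the Corefile in main is a made-up example, not the cluster's actual ConfigMap:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts block immediately before the
// `forward . /etc/resolv.conf` line, which is what the logged sed command does.
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.94.1"))
}
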
	I1124 13:48:04.913621  607669 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-513442" to be "Ready" ...
	I1124 13:48:05.136398  607669 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:48:05.160590  608917 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:48:05.160664  608917 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:48:05.160771  608917 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:48:05.160854  608917 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:48:05.160886  608917 kubeadm.go:319] OS: Linux
	I1124 13:48:05.160993  608917 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:48:05.161038  608917 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:48:05.161128  608917 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:48:05.161215  608917 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:48:05.161290  608917 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:48:05.161348  608917 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:48:05.161407  608917 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:48:05.161478  608917 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:48:05.161607  608917 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:48:05.161758  608917 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:48:05.161894  608917 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:48:05.162009  608917 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:48:05.163691  608917 out.go:252]   - Generating certificates and keys ...
	I1124 13:48:05.163805  608917 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:48:05.163947  608917 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:48:05.164054  608917 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:48:05.164154  608917 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:48:05.164250  608917 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:48:05.164325  608917 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:48:05.164403  608917 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:48:05.164579  608917 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-608395] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:48:05.164662  608917 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:48:05.164844  608917 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-608395] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:48:05.164993  608917 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:48:05.165088  608917 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:48:05.165130  608917 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:48:05.165182  608917 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:48:05.165250  608917 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:48:05.165313  608917 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:48:05.165382  608917 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:48:05.165456  608917 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:48:05.165506  608917 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:48:05.165580  608917 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:48:05.165637  608917 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:48:05.167858  608917 out.go:252]   - Booting up control plane ...
	I1124 13:48:05.167962  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:48:05.168043  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:48:05.168104  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:48:05.168199  608917 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:48:05.168298  608917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:48:05.168436  608917 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:48:05.168514  608917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:48:05.168558  608917 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:48:05.168715  608917 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:48:05.168854  608917 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:48:05.168953  608917 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001985013s
	I1124 13:48:05.169093  608917 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:48:05.169202  608917 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1124 13:48:05.169339  608917 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:48:05.169461  608917 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:48:05.169582  608917 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.171045551s
	I1124 13:48:05.169691  608917 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.746683308s
	I1124 13:48:05.169782  608917 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002983514s
	I1124 13:48:05.169958  608917 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:48:05.170079  608917 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:48:05.170136  608917 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:48:05.170449  608917 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-608395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:48:05.170534  608917 kubeadm.go:319] [bootstrap-token] Using token: 0m3tk6.bp5t9g266aj6zg5e
	I1124 13:48:05.172344  608917 out.go:252]   - Configuring RBAC rules ...
	I1124 13:48:05.172497  608917 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:48:05.172606  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:48:05.172790  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:48:05.172947  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:48:05.173067  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:48:05.173152  608917 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:48:05.173251  608917 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:48:05.173290  608917 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:48:05.173330  608917 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:48:05.173336  608917 kubeadm.go:319] 
	I1124 13:48:05.173391  608917 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:48:05.173397  608917 kubeadm.go:319] 
	I1124 13:48:05.173470  608917 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:48:05.173476  608917 kubeadm.go:319] 
	I1124 13:48:05.173498  608917 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:48:05.173553  608917 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:48:05.173610  608917 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:48:05.173623  608917 kubeadm.go:319] 
	I1124 13:48:05.173669  608917 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:48:05.173675  608917 kubeadm.go:319] 
	I1124 13:48:05.173718  608917 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:48:05.173727  608917 kubeadm.go:319] 
	I1124 13:48:05.173778  608917 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:48:05.173858  608917 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:48:05.173981  608917 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:48:05.173990  608917 kubeadm.go:319] 
	I1124 13:48:05.174085  608917 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:48:05.174165  608917 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:48:05.174170  608917 kubeadm.go:319] 
	I1124 13:48:05.174250  608917 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0m3tk6.bp5t9g266aj6zg5e \
	I1124 13:48:05.174352  608917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c \
	I1124 13:48:05.174376  608917 kubeadm.go:319] 	--control-plane 
	I1124 13:48:05.174381  608917 kubeadm.go:319] 
	I1124 13:48:05.174459  608917 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:48:05.174465  608917 kubeadm.go:319] 
	I1124 13:48:05.174560  608917 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0m3tk6.bp5t9g266aj6zg5e \
	I1124 13:48:05.174802  608917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c 
	I1124 13:48:05.174826  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:48:05.174836  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:48:05.177484  608917 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:48:05.137677  607669 addons.go:530] duration metric: took 608.290782ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:48:01.553682  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:02.346718  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:51122->192.168.76.2:8443: read: connection reset by peer
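
The healthz checks above are plain HTTPS probes against the apiserver; the failures recorded here ("connection reset by peer", "connection refused", TLS handshake timeouts) are transport errors returned by that probe. A minimal sketch of such a probe using net/http with certificate verification disabled; this is an assumption-laden illustration, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues an HTTPS GET against the apiserver's /healthz endpoint,
// skipping certificate verification since the check only cares whether the
// endpoint answers at all.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. connection refused or a TLS handshake timeout, as seen above
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("apiserver unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.76.2:8443/healthz"))
}
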
	I1124 13:48:02.346797  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:02.346868  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:02.379430  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:02.379461  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:48:02.379468  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:02.379472  572647 cri.go:89] found id: ""
	I1124 13:48:02.379481  572647 logs.go:282] 3 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:02.379554  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.384666  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.389028  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.393413  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:02.393493  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:02.423298  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:02.423317  572647 cri.go:89] found id: ""
	I1124 13:48:02.423325  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:02.423377  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.428323  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:02.428396  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:02.458971  572647 cri.go:89] found id: ""
	I1124 13:48:02.459002  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.459014  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:02.459023  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:02.459136  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:02.495221  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:02.495253  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:02.495258  572647 cri.go:89] found id: ""
	I1124 13:48:02.495267  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:02.495325  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.504536  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.513709  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:02.513782  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:02.545556  572647 cri.go:89] found id: ""
	I1124 13:48:02.545589  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.545603  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:02.545613  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:02.545686  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:02.575683  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:02.575710  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:48:02.575714  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:02.575717  572647 cri.go:89] found id: ""
	I1124 13:48:02.575725  572647 logs.go:282] 3 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:02.575799  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.580340  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.584784  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.588717  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:02.588774  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:02.617522  572647 cri.go:89] found id: ""
	I1124 13:48:02.617550  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.617558  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:02.617567  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:02.617616  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:02.647375  572647 cri.go:89] found id: ""
	I1124 13:48:02.647407  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.647418  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:02.647432  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:02.647445  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:02.685850  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:02.685900  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:02.794118  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:02.794164  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:02.866960  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:02.866982  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:02.866997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:02.908627  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:48:02.908671  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:48:02.949348  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:02.949380  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:02.997498  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:02.997541  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:03.065816  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:48:03.065856  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:48:03.101360  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:03.101393  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:03.140140  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:03.140183  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:03.160020  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:03.160058  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:03.202092  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:03.202136  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:03.247020  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:03.247060  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:03.283475  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:03.283518  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
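
While the apiserver stays unreachable, the log-gathering round above pulls the last 400 lines from each control-plane container found via crictl, plus kubelet, containerd, and dmesg, so the report can show why the control plane is unhealthy. A purely illustrative sketch of the per-container part; the container ID in main is an example, not a real one:

package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs collects the last 400 log lines for each container ID,
// mirroring the repeated `crictl logs --tail 400 <id>` calls above.
func gatherContainerLogs(ids []string) map[string]string {
	logs := make(map[string]string)
	for _, id := range ids {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			logs[id] = "error: " + err.Error()
			continue
		}
		logs[id] = string(out)
	}
	return logs
}

func main() {
	// IDs would normally come from `crictl ps -a --quiet --name=<component>`.
	for id, l := range gatherContainerLogs([]string{"6700c126fd32"}) {
		fmt.Printf("== %s ==\n%s\n", id, l)
	}
}
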
	I1124 13:48:05.832996  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:05.833478  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:05.833543  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:05.833607  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:05.862229  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:05.862254  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:05.862258  572647 cri.go:89] found id: ""
	I1124 13:48:05.862267  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:05.862320  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.867091  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.871378  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:05.871455  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:05.900338  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:05.900361  572647 cri.go:89] found id: ""
	I1124 13:48:05.900370  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:05.900428  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.904531  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:05.904606  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:05.933536  572647 cri.go:89] found id: ""
	I1124 13:48:05.933565  572647 logs.go:282] 0 containers: []
	W1124 13:48:05.933579  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:05.933587  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:05.933645  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:05.961942  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:05.961966  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:05.961980  572647 cri.go:89] found id: ""
	I1124 13:48:05.961988  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:05.962048  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.966413  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.970560  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:05.970640  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:05.999021  572647 cri.go:89] found id: ""
	I1124 13:48:05.999046  572647 logs.go:282] 0 containers: []
	W1124 13:48:05.999057  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:05.999065  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:05.999125  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:06.030192  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:06.030216  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:06.030222  572647 cri.go:89] found id: ""
	I1124 13:48:06.030233  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:06.030291  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:06.034509  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:06.038518  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:06.038602  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:06.067432  572647 cri.go:89] found id: ""
	I1124 13:48:06.067459  572647 logs.go:282] 0 containers: []
	W1124 13:48:06.067469  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:06.067477  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:06.067557  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:06.098683  572647 cri.go:89] found id: ""
	I1124 13:48:06.098712  572647 logs.go:282] 0 containers: []
	W1124 13:48:06.098723  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:06.098736  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:06.098753  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:06.163737  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:06.163765  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:06.163783  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:05.179143  608917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:48:05.184780  608917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:48:05.184802  608917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:48:05.199547  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:48:05.451312  608917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:48:05.451481  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:05.451599  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-608395 minikube.k8s.io/updated_at=2025_11_24T13_48_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=no-preload-608395 minikube.k8s.io/primary=true
	I1124 13:48:05.479434  608917 ops.go:34] apiserver oom_adj: -16
	I1124 13:48:05.560179  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:06.061204  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:06.560802  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:07.061219  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:07.561139  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:08.061015  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:08.561034  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.061268  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.560397  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.636185  608917 kubeadm.go:1114] duration metric: took 4.184744627s to wait for elevateKubeSystemPrivileges
	I1124 13:48:09.636235  608917 kubeadm.go:403] duration metric: took 14.511667218s to StartCluster
	I1124 13:48:09.636257  608917 settings.go:142] acquiring lock: {Name:mka599a3c9bae62ffb84d261186583052ce40f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:09.636332  608917 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:48:09.637980  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:09.638233  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:48:09.638262  608917 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:48:09.638340  608917 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:48:09.638439  608917 addons.go:70] Setting storage-provisioner=true in profile "no-preload-608395"
	I1124 13:48:09.638460  608917 addons.go:239] Setting addon storage-provisioner=true in "no-preload-608395"
	I1124 13:48:09.638459  608917 addons.go:70] Setting default-storageclass=true in profile "no-preload-608395"
	I1124 13:48:09.638486  608917 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-608395"
	I1124 13:48:09.638512  608917 host.go:66] Checking if "no-preload-608395" exists ...
	I1124 13:48:09.638608  608917 config.go:182] Loaded profile config "no-preload-608395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:48:09.638889  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.639090  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.640719  608917 out.go:179] * Verifying Kubernetes components...
	I1124 13:48:09.642235  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:48:09.665980  608917 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:48:09.668239  608917 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:09.668262  608917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:48:09.668334  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:48:09.668545  608917 addons.go:239] Setting addon default-storageclass=true in "no-preload-608395"
	I1124 13:48:09.668594  608917 host.go:66] Checking if "no-preload-608395" exists ...
	I1124 13:48:09.669115  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.708052  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:48:09.711213  608917 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:09.711236  608917 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:48:09.711297  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:48:09.737250  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:48:09.745340  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:48:09.808489  608917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:48:09.832661  608917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:09.863280  608917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:09.941101  608917 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 13:48:09.942521  608917 node_ready.go:35] waiting up to 6m0s for node "no-preload-608395" to be "Ready" ...
	I1124 13:48:10.163475  608917 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:48:05.418106  607669 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-513442" context rescaled to 1 replicas
	W1124 13:48:06.917478  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:09.417409  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:06.199640  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:06.199675  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:06.235793  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:06.235827  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:06.290172  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:06.290212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:06.325935  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:06.325975  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:06.359485  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:06.359523  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:06.406787  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:06.406834  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:06.503206  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:06.503251  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:06.520877  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:06.520924  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:06.561472  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:06.561510  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:06.591722  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:06.591748  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:09.128043  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:09.128549  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
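
The healthz probe that keeps reporting "stopped ... connection refused" is a plain HTTPS GET against the apiserver's :8443/healthz endpoint, retried until it answers 200 (as it eventually does for the other clusters further down). A minimal sketch of that polling loop follows; it skips TLS verification purely for brevity, whereas the real checker trusts the cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Test-only shortcut: skip certificate verification instead of loading the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	url := "https://192.168.76.2:8443/healthz"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// "connection refused" surfaces here while the apiserver container is down.
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```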
	I1124 13:48:09.128609  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:09.128678  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:09.158194  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:09.158216  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:09.158220  572647 cri.go:89] found id: ""
	I1124 13:48:09.158229  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:09.158308  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.162575  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.167402  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:09.167472  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:09.196608  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:09.196633  572647 cri.go:89] found id: ""
	I1124 13:48:09.196645  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:09.196709  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.201107  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:09.201190  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:09.232265  572647 cri.go:89] found id: ""
	I1124 13:48:09.232300  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.232311  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:09.232320  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:09.232386  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:09.272990  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:09.273017  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:09.273022  572647 cri.go:89] found id: ""
	I1124 13:48:09.273033  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:09.273100  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.278614  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.283409  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:09.283485  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:09.314562  572647 cri.go:89] found id: ""
	I1124 13:48:09.314592  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.314604  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:09.314611  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:09.314682  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:09.346903  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:09.346963  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:09.346970  572647 cri.go:89] found id: ""
	I1124 13:48:09.346979  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:09.347049  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.351444  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.355601  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:09.355675  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:09.387667  572647 cri.go:89] found id: ""
	I1124 13:48:09.387697  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.387709  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:09.387716  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:09.387779  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:09.417828  572647 cri.go:89] found id: ""
	I1124 13:48:09.417854  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.417863  572647 logs.go:284] No container was found matching "storage-provisioner"
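
Each retry cycle above begins by asking crictl for the container IDs of every control-plane component (`crictl ps -a --quiet --name=<component>`); an empty result is what produces the `No container was found matching "..."` warnings. A small sketch of that scan, assuming crictl is on PATH and passwordless sudo as on the test node:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the crictl calls in the log: it returns the IDs (one per line)
// that `crictl ps -a --quiet --name=<name>` prints for a given component.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: error: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
```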
	I1124 13:48:09.417876  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:09.417894  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:09.518663  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:09.518707  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:09.538049  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:09.538093  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:09.606209  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
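
With the apiserver down, the "describe nodes" gather step fails: kubectl exits with status 1 and the connection-refused message arrives on stderr, which the log entry above captures verbatim. A sketch of running that command and separating exit status from stderr with os/exec (paths taken from the log; this is not the ssh_runner implementation itself):

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log runs over SSH; here run locally as a sketch.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// With the apiserver unreachable, kubectl exits 1 and the refusal message lands on
		// stderr, exactly what the "failed describe nodes" log entry records.
		fmt.Printf("exit status %d\nstderr: %s\n", exitErr.ExitCode(), stderr.String())
		return
	} else if err != nil {
		fmt.Println("could not start kubectl:", err)
		return
	}
	fmt.Println(stdout.String())
}
```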
	I1124 13:48:09.606232  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:09.606246  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:09.646703  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:09.646736  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:09.708037  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:09.708078  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:09.779698  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:09.779735  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:09.819613  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:09.819663  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:09.867349  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:09.867388  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:09.917580  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:09.917620  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:09.959751  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:09.959793  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:10.006236  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:10.006274  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
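
The remaining gather steps tail the last 400 lines of the containerd and kubelet journals and of each container found earlier. A compact sketch of those collection commands; the container ID is a placeholder for one returned by the crictl scan above, not a value to be taken literally.

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one of the log-collection commands seen above and prints its output.
func gather(label string, args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf(">>> %s (err=%v)\n%s\n", label, err, out)
}

func main() {
	// Unit names match the log; "<container-id>" stands in for an ID from `crictl ps`.
	gather("containerd", "sudo", "journalctl", "-u", "containerd", "-n", "400")
	gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
	gather("kube-apiserver", "sudo", "crictl", "logs", "--tail", "400", "<container-id>")
}
```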
	I1124 13:48:10.165110  608917 addons.go:530] duration metric: took 526.764143ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:48:10.444998  608917 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-608395" context rescaled to 1 replicas
	W1124 13:48:11.948043  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:14.445721  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:11.417485  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:13.418201  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:12.563487  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:12.564031  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:12.564091  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:12.564151  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:12.598524  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:12.598553  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:12.598559  572647 cri.go:89] found id: ""
	I1124 13:48:12.598570  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:12.598654  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.603466  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.608383  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:12.608462  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:12.652395  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:12.652422  572647 cri.go:89] found id: ""
	I1124 13:48:12.652433  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:12.652503  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.657966  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:12.658060  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:12.693432  572647 cri.go:89] found id: ""
	I1124 13:48:12.693468  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.693480  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:12.693489  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:12.693558  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:12.731546  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:12.731572  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:12.731579  572647 cri.go:89] found id: ""
	I1124 13:48:12.731590  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:12.731820  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.737055  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.741859  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:12.741953  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:12.776627  572647 cri.go:89] found id: ""
	I1124 13:48:12.776652  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.776660  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:12.776667  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:12.776735  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:12.809077  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:12.809099  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:12.809102  572647 cri.go:89] found id: ""
	I1124 13:48:12.809112  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:12.809166  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.813963  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.818488  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:12.818563  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:12.852844  572647 cri.go:89] found id: ""
	I1124 13:48:12.852879  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.852891  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:12.852900  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:12.853034  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:12.889177  572647 cri.go:89] found id: ""
	I1124 13:48:12.889228  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.889240  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:12.889255  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:12.889278  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:12.941108  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:12.941146  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:13.012950  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:13.012998  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:13.059324  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:13.059367  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:13.096188  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:13.096235  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:13.157287  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:13.157338  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:13.198203  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:13.198250  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:13.219729  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:13.219773  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:13.293315  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:13.293338  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:13.293356  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:13.338975  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:13.339029  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:13.385546  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:13.385596  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:13.427130  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:13.427162  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:16.027717  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:16.028251  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:16.028310  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:16.028363  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:16.058811  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:16.058839  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:16.058847  572647 cri.go:89] found id: ""
	I1124 13:48:16.058858  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:16.058999  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.063797  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.068208  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:16.068282  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:16.097374  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:16.097404  572647 cri.go:89] found id: ""
	I1124 13:48:16.097416  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:16.097484  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.102967  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:16.103045  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:16.133626  572647 cri.go:89] found id: ""
	I1124 13:48:16.133660  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.133670  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:16.133676  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:16.133746  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:16.165392  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:16.165424  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:16.165431  572647 cri.go:89] found id: ""
	I1124 13:48:16.165442  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:16.165507  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.170277  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.174579  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:16.174661  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	W1124 13:48:16.445831  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:18.945868  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:15.917184  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:17.917526  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:19.416721  607669 node_ready.go:49] node "old-k8s-version-513442" is "Ready"
	I1124 13:48:19.416760  607669 node_ready.go:38] duration metric: took 14.503103561s for node "old-k8s-version-513442" to be "Ready" ...
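
node_ready.go is polling the node object until its NodeReady condition turns True, which is what flips the output from the repeated "will retry" warnings to the "is Ready" line here. A client-go sketch of that wait, assuming a local kubeconfig (clientcmd.RecommendedHomeFile) rather than minikube's internal client:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True,
// the same check node_ready.go keeps retrying above.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "old-k8s-version-513442", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		fmt.Println("node not Ready yet, will retry")
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node Ready")
		case <-time.After(2 * time.Second):
		}
	}
}
```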
	I1124 13:48:19.416778  607669 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:48:19.416833  607669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:48:19.430267  607669 api_server.go:72] duration metric: took 14.90093273s to wait for apiserver process to appear ...
	I1124 13:48:19.430299  607669 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:48:19.430326  607669 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 13:48:19.436844  607669 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 13:48:19.438582  607669 api_server.go:141] control plane version: v1.28.0
	I1124 13:48:19.438618  607669 api_server.go:131] duration metric: took 8.311152ms to wait for apiserver health ...
	I1124 13:48:19.438632  607669 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:48:19.443134  607669 system_pods.go:59] 8 kube-system pods found
	I1124 13:48:19.443191  607669 system_pods.go:61] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.443200  607669 system_pods.go:61] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.443207  607669 system_pods.go:61] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.443213  607669 system_pods.go:61] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.443219  607669 system_pods.go:61] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.443225  607669 system_pods.go:61] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.443231  607669 system_pods.go:61] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.443240  607669 system_pods.go:61] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.443248  607669 system_pods.go:74] duration metric: took 4.608559ms to wait for pod list to return data ...
	I1124 13:48:19.443260  607669 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:48:19.446125  607669 default_sa.go:45] found service account: "default"
	I1124 13:48:19.446157  607669 default_sa.go:55] duration metric: took 2.890045ms for default service account to be created ...
	I1124 13:48:19.446170  607669 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:48:19.450324  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:19.450375  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.450385  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.450394  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.450408  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.450415  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.450425  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.450434  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.450449  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.450484  607669 retry.go:31] will retry after 306.547577ms: missing components: kube-dns
	I1124 13:48:19.761785  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:19.761821  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.761828  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.761835  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.761839  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.761843  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.761846  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.761850  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.761855  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.761871  607669 retry.go:31] will retry after 263.639636ms: missing components: kube-dns
	I1124 13:48:20.030723  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:20.030764  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:20.030773  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:20.030781  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:20.030787  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:20.030794  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:20.030799  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:20.030804  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:20.030812  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:20.030836  607669 retry.go:31] will retry after 485.23875ms: missing components: kube-dns
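
system_pods.go keeps listing the kube-system pods and retrying with a short backoff until the required components (here kube-dns, i.e. coredns) are Running. A simplified sketch of that loop follows; it only checks that every kube-system pod reports phase Running, while the real check also insists on specific components being present.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		var pending []string
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				pending = append(pending, p.Name)
			}
		}
		if len(pending) == 0 {
			fmt.Printf("%d kube-system pods found, all Running\n", len(pods.Items))
			return
		}
		fmt.Printf("will retry, still pending: %v\n", pending)
		time.Sleep(300 * time.Millisecond)
	}
}
```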
	I1124 13:48:16.203971  572647 cri.go:89] found id: ""
	I1124 13:48:16.204004  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.204016  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:16.204025  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:16.204087  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:16.233087  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:16.233113  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:16.233119  572647 cri.go:89] found id: ""
	I1124 13:48:16.233130  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:16.233184  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.237937  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.242366  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:16.242450  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:16.273007  572647 cri.go:89] found id: ""
	I1124 13:48:16.273034  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.273043  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:16.273049  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:16.273100  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:16.302483  572647 cri.go:89] found id: ""
	I1124 13:48:16.302518  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.302537  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:16.302553  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:16.302575  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:16.360777  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:16.360817  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:16.391672  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:16.391700  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:16.490704  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:16.490743  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:16.530411  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:16.530448  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:16.567070  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:16.567107  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:16.601689  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:16.601728  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:16.646105  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:16.646143  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:16.682522  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:16.682560  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:16.699850  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:16.699887  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:16.759811  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:16.759835  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:16.759853  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:16.795013  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:16.795048  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.334057  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:19.334568  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:19.334661  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:19.334733  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:19.365714  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:19.365735  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.365739  572647 cri.go:89] found id: ""
	I1124 13:48:19.365747  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:19.365800  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.370354  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.374856  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:19.374992  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:19.405492  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:19.405519  572647 cri.go:89] found id: ""
	I1124 13:48:19.405529  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:19.405589  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.411364  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:19.411426  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:19.443360  572647 cri.go:89] found id: ""
	I1124 13:48:19.443391  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.443404  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:19.443412  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:19.443477  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:19.475298  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:19.475324  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:19.475331  572647 cri.go:89] found id: ""
	I1124 13:48:19.475341  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:19.475407  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.480369  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.484782  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:19.484863  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:19.514622  572647 cri.go:89] found id: ""
	I1124 13:48:19.514666  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.514716  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:19.514726  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:19.514807  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:19.550847  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:19.550872  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:19.550877  572647 cri.go:89] found id: ""
	I1124 13:48:19.550886  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:19.550963  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.556478  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.561320  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:19.561401  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:19.596190  572647 cri.go:89] found id: ""
	I1124 13:48:19.596226  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.596238  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:19.596247  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:19.596309  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:19.627382  572647 cri.go:89] found id: ""
	I1124 13:48:19.627413  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.627424  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:19.627436  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:19.627452  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:19.694796  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:19.694836  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:19.752858  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:19.752896  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:19.788182  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:19.788224  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:19.879216  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:19.879255  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:19.940757  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:19.940776  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:19.940790  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.979681  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:19.979726  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:20.020042  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:20.020085  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:20.064463  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:20.064499  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:20.098012  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:20.098044  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:20.132122  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:20.132157  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:20.148958  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:20.148997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:20.521094  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:20.521123  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Running
	I1124 13:48:20.521130  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:20.521133  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:20.521137  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:20.521141  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:20.521145  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:20.521148  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:20.521151  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Running
	I1124 13:48:20.521159  607669 system_pods.go:126] duration metric: took 1.074982882s to wait for k8s-apps to be running ...
	I1124 13:48:20.521166  607669 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:48:20.521215  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:48:20.535666  607669 system_svc.go:56] duration metric: took 14.486184ms WaitForService to wait for kubelet
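
The kubelet-service wait above boils down to asking systemd whether the unit is active. A one-call sketch, using `systemctl is-active --quiet kubelet` (unit name simplified from the exact invocation in the log) and treating exit status 0 as "running":

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet` exits 0 only if the unit is active, which is all
	// the WaitForService check needs to know.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
```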
	I1124 13:48:20.535706  607669 kubeadm.go:587] duration metric: took 16.006375183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:48:20.535732  607669 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:48:20.538619  607669 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:48:20.538646  607669 node_conditions.go:123] node cpu capacity is 8
	I1124 13:48:20.538662  607669 node_conditions.go:105] duration metric: took 2.9245ms to run NodePressure ...
	I1124 13:48:20.538676  607669 start.go:242] waiting for startup goroutines ...
	I1124 13:48:20.538683  607669 start.go:247] waiting for cluster config update ...
	I1124 13:48:20.538693  607669 start.go:256] writing updated cluster config ...
	I1124 13:48:20.539040  607669 ssh_runner.go:195] Run: rm -f paused
	I1124 13:48:20.543325  607669 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:20.547793  607669 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-b5rrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.552447  607669 pod_ready.go:94] pod "coredns-5dd5756b68-b5rrl" is "Ready"
	I1124 13:48:20.552472  607669 pod_ready.go:86] duration metric: took 4.651627ms for pod "coredns-5dd5756b68-b5rrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.556328  607669 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.561689  607669 pod_ready.go:94] pod "etcd-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.561717  607669 pod_ready.go:86] duration metric: took 5.363766ms for pod "etcd-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.564634  607669 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.569265  607669 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.569291  607669 pod_ready.go:86] duration metric: took 4.631558ms for pod "kube-apiserver-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.572304  607669 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.948397  607669 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.948423  607669 pod_ready.go:86] duration metric: took 376.095956ms for pod "kube-controller-manager-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.148648  607669 pod_ready.go:83] waiting for pod "kube-proxy-hzfcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.548255  607669 pod_ready.go:94] pod "kube-proxy-hzfcx" is "Ready"
	I1124 13:48:21.548288  607669 pod_ready.go:86] duration metric: took 399.608636ms for pod "kube-proxy-hzfcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.748744  607669 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:22.147789  607669 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-513442" is "Ready"
	I1124 13:48:22.147821  607669 pod_ready.go:86] duration metric: took 399.0528ms for pod "kube-scheduler-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:22.147833  607669 pod_ready.go:40] duration metric: took 1.604464617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
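
The extra pod_ready wait walks a list of label selectors (k8s-app=kube-dns, component=etcd, and so on) and blocks until each matching pod reports the Ready condition. A sketch using `kubectl wait` per selector; note the real helper also accepts pods that have disappeared ("or be gone"), which this sketch does not handle.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		// kubectl wait blocks until the matching pods report the Ready condition.
		cmd := exec.Command("kubectl", "wait", "--namespace=kube-system",
			"--for=condition=Ready", "pod", "-l", sel, "--timeout=4m")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s: %s(err=%v)\n", sel, out, err)
	}
}
```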
	I1124 13:48:22.193883  607669 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 13:48:22.196207  607669 out.go:203] 
	W1124 13:48:22.197964  607669 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 13:48:22.199516  607669 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 13:48:22.201541  607669 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-513442" cluster and "default" namespace by default
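
The closing warning comes from comparing the minor versions of the installed kubectl (1.34.2) and the cluster (1.28.0): a skew of 6 is far outside the supported range. A tiny sketch of that comparison; the threshold of 1 reflects the usual kubectl skew policy and is an assumption, not a value taken from the log.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of two
// "major.minor.patch" version strings, the number the warning above is based on.
func minorSkew(client, cluster string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(client) - minor(cluster)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	skew := minorSkew("1.34.2", "1.28.0")
	fmt.Printf("minor skew: %d\n", skew) // 6, large enough to warrant the compatibility warning
	if skew > 1 {
		fmt.Println("! kubectl version may have incompatibilities with the cluster")
	}
}
```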
	W1124 13:48:20.947014  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:22.948554  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	I1124 13:48:24.446130  608917 node_ready.go:49] node "no-preload-608395" is "Ready"
	I1124 13:48:24.446168  608917 node_ready.go:38] duration metric: took 14.503611427s for node "no-preload-608395" to be "Ready" ...
	I1124 13:48:24.446195  608917 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:48:24.446254  608917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:48:24.460952  608917 api_server.go:72] duration metric: took 14.82264088s to wait for apiserver process to appear ...
	I1124 13:48:24.460990  608917 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:48:24.461021  608917 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 13:48:24.466768  608917 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 13:48:24.468117  608917 api_server.go:141] control plane version: v1.34.1
	I1124 13:48:24.468151  608917 api_server.go:131] duration metric: took 7.151862ms to wait for apiserver health ...
	I1124 13:48:24.468164  608917 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:48:24.473836  608917 system_pods.go:59] 8 kube-system pods found
	I1124 13:48:24.473891  608917 system_pods.go:61] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.473901  608917 system_pods.go:61] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.473965  608917 system_pods.go:61] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.473980  608917 system_pods.go:61] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.473987  608917 system_pods.go:61] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.473995  608917 system_pods.go:61] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.474001  608917 system_pods.go:61] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.474011  608917 system_pods.go:61] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.474025  608917 system_pods.go:74] duration metric: took 5.853076ms to wait for pod list to return data ...
	I1124 13:48:24.474037  608917 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:48:24.476681  608917 default_sa.go:45] found service account: "default"
	I1124 13:48:24.476712  608917 default_sa.go:55] duration metric: took 2.661232ms for default service account to be created ...
	I1124 13:48:24.476724  608917 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:48:24.479715  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.479757  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.479765  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.479776  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.479782  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.479788  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.479793  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.479798  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.479806  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.479831  608917 retry.go:31] will retry after 257.034103ms: missing components: kube-dns
	I1124 13:48:24.740811  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.740842  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.740848  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.740854  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.740858  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.740863  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.740866  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.740869  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.740876  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.740892  608917 retry.go:31] will retry after 244.335921ms: missing components: kube-dns
	I1124 13:48:24.989021  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.989054  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.989061  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.989067  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.989072  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.989077  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.989080  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.989084  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.989089  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.989104  608917 retry.go:31] will retry after 431.238044ms: missing components: kube-dns
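The retry.go lines above repeat the pod-list check after short, roughly 250-500ms delays until no required component is missing. A minimal sketch of that retry pattern, with a stand-in check function in place of the real pod listing:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntilReady re-runs check until it reports no missing components or the
// overall deadline passes, sleeping a short randomized interval between
// attempts, similar to the backoffs seen in the log above.
func retryUntilReady(check func() []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; still missing: %v", missing)
		}
		delay := 200*time.Millisecond + time.Duration(rand.Intn(300))*time.Millisecond
		fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
		time.Sleep(delay)
	}
}

func main() {
	attempts := 0
	// Stand-in check: pretends kube-dns becomes Running on the third attempt.
	check := func() []string {
		attempts++
		if attempts < 3 {
			return []string{"kube-dns"}
		}
		return nil
	}
	if err := retryUntilReady(check, 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("all components running")
}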
	I1124 13:48:22.686011  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:22.686450  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:22.686506  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:22.686563  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:22.718842  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:22.718868  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:22.718874  572647 cri.go:89] found id: ""
	I1124 13:48:22.718885  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:22.719025  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.724051  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.728627  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:22.728697  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:22.758279  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:22.758305  572647 cri.go:89] found id: ""
	I1124 13:48:22.758315  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:22.758378  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.762905  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:22.763025  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:22.796176  572647 cri.go:89] found id: ""
	I1124 13:48:22.796207  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.796218  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:22.796227  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:22.796293  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:22.828770  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:22.828801  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:22.828815  572647 cri.go:89] found id: ""
	I1124 13:48:22.828827  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:22.828886  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.833530  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.837668  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:22.837750  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:22.867760  572647 cri.go:89] found id: ""
	I1124 13:48:22.867793  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.867806  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:22.867815  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:22.867976  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:22.899275  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:22.899305  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:22.899312  572647 cri.go:89] found id: ""
	I1124 13:48:22.899327  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:22.899391  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.903859  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.908121  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:22.908190  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:22.938883  572647 cri.go:89] found id: ""
	I1124 13:48:22.938961  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.938972  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:22.938980  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:22.939033  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:22.969840  572647 cri.go:89] found id: ""
	I1124 13:48:22.969864  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.969872  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:22.969882  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:22.969903  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:23.031386  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:23.031411  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:23.031425  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:23.067770  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:23.067805  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:23.104851  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:23.104886  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:23.160621  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:23.160668  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:23.190994  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:23.191026  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:23.226509  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:23.226542  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:23.269082  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:23.269130  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:23.360572  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:23.360613  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:23.399049  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:23.399089  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:23.440241  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:23.440282  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:23.474172  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:23.474212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
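Each "Gathering logs for ..." step above shells out to crictl, journalctl, or dmesg on the node and captures the output for the post-mortem. A rough local sketch of that collection loop; in the actual run these commands go through minikube's SSH runner on the node, and the crictl log commands take the container IDs found earlier:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Diagnostic commands mirroring the gathering steps in the log above.
	// On a real node these run via SSH with sudo; the crictl variants would
	// additionally be invoked per container ID from `crictl ps -a --quiet`.
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u containerd -n 400",
		"sudo crictl ps -a",
		"sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("==> %s <==\n%s\n", c, out)
		if err != nil {
			fmt.Printf("(command failed: %v)\n", err)
		}
	}
}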
	I1124 13:48:25.992569  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:25.993167  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:25.993241  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:25.993310  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:26.021789  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:26.021816  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:26.021823  572647 cri.go:89] found id: ""
	I1124 13:48:26.021834  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:26.021985  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.027084  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.031267  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:26.031350  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:26.063349  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:26.063379  572647 cri.go:89] found id: ""
	I1124 13:48:26.063390  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:26.063448  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.068064  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:26.068140  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:26.096106  572647 cri.go:89] found id: ""
	I1124 13:48:26.096148  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.096158  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:26.096165  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:26.096220  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:26.126156  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:26.126186  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:26.126193  572647 cri.go:89] found id: ""
	I1124 13:48:26.126205  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:26.126275  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.131369  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.135595  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:26.135657  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:26.163133  572647 cri.go:89] found id: ""
	I1124 13:48:26.163161  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.163169  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:26.163187  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:26.163244  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:26.192355  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:26.192378  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:26.192384  572647 cri.go:89] found id: ""
	I1124 13:48:26.192394  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:26.192549  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.197316  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:25.424597  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:25.424631  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:25.424636  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:25.424642  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:25.424646  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:25.424650  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:25.424653  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:25.424656  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:25.424663  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:25.424679  608917 retry.go:31] will retry after 458.014987ms: missing components: kube-dns
	I1124 13:48:25.886603  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:25.886633  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Running
	I1124 13:48:25.886641  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:25.886644  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:25.886649  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:25.886653  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:25.886657  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:25.886660  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:25.886663  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Running
	I1124 13:48:25.886671  608917 system_pods.go:126] duration metric: took 1.409940532s to wait for k8s-apps to be running ...
	I1124 13:48:25.886680  608917 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:48:25.886726  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:48:25.901294  608917 system_svc.go:56] duration metric: took 14.604723ms WaitForService to wait for kubelet
	I1124 13:48:25.901324  608917 kubeadm.go:587] duration metric: took 16.26302303s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:48:25.901343  608917 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:48:25.904190  608917 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:48:25.904219  608917 node_conditions.go:123] node cpu capacity is 8
	I1124 13:48:25.904234  608917 node_conditions.go:105] duration metric: took 2.88688ms to run NodePressure ...
	I1124 13:48:25.904249  608917 start.go:242] waiting for startup goroutines ...
	I1124 13:48:25.904256  608917 start.go:247] waiting for cluster config update ...
	I1124 13:48:25.904266  608917 start.go:256] writing updated cluster config ...
	I1124 13:48:25.904560  608917 ssh_runner.go:195] Run: rm -f paused
	I1124 13:48:25.909215  608917 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:25.912986  608917 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rcf8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.917301  608917 pod_ready.go:94] pod "coredns-66bc5c9577-rcf8v" is "Ready"
	I1124 13:48:25.917324  608917 pod_ready.go:86] duration metric: took 4.297309ms for pod "coredns-66bc5c9577-rcf8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.919442  608917 pod_ready.go:83] waiting for pod "etcd-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.923976  608917 pod_ready.go:94] pod "etcd-no-preload-608395" is "Ready"
	I1124 13:48:25.923999  608917 pod_ready.go:86] duration metric: took 4.535115ms for pod "etcd-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.926003  608917 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.930385  608917 pod_ready.go:94] pod "kube-apiserver-no-preload-608395" is "Ready"
	I1124 13:48:25.930413  608917 pod_ready.go:86] duration metric: took 4.382406ms for pod "kube-apiserver-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.932261  608917 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.313581  608917 pod_ready.go:94] pod "kube-controller-manager-no-preload-608395" is "Ready"
	I1124 13:48:26.313615  608917 pod_ready.go:86] duration metric: took 381.333887ms for pod "kube-controller-manager-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.514064  608917 pod_ready.go:83] waiting for pod "kube-proxy-5vj5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.913664  608917 pod_ready.go:94] pod "kube-proxy-5vj5p" is "Ready"
	I1124 13:48:26.913702  608917 pod_ready.go:86] duration metric: took 399.60223ms for pod "kube-proxy-5vj5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.114488  608917 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.514056  608917 pod_ready.go:94] pod "kube-scheduler-no-preload-608395" is "Ready"
	I1124 13:48:27.514084  608917 pod_ready.go:86] duration metric: took 399.56934ms for pod "kube-scheduler-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.514098  608917 pod_ready.go:40] duration metric: took 1.604847792s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:27.561310  608917 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 13:48:27.563544  608917 out.go:179] * Done! kubectl is now configured to use "no-preload-608395" cluster and "default" namespace by default
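The pod_ready.go waits above amount to listing the labeled kube-system pods and checking each pod's PodReady condition. A minimal client-go sketch of that check, assuming a default kubeconfig and using only the kube-dns label (the test also waits on the etcd, apiserver, controller-manager, proxy, and scheduler labels):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and the single label selector are simplifications.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Printf("pod %q ready=%v\n", pod.Name, isPodReady(&pod))
	}
}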
	I1124 13:48:26.202352  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:26.202439  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:26.231899  572647 cri.go:89] found id: ""
	I1124 13:48:26.231953  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.231964  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:26.231973  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:26.232040  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:26.263417  572647 cri.go:89] found id: ""
	I1124 13:48:26.263446  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.263459  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:26.263473  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:26.263488  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:26.354230  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:26.354265  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:26.389608  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:26.389652  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:26.427040  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:26.427077  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:26.466568  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:26.466603  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:26.503710  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:26.503749  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:26.539150  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:26.539193  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:26.583782  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:26.583825  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:26.617656  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:26.617696  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:26.634777  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:26.634809  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:26.693534  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:26.693559  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:26.693577  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:26.748627  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:26.748668  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.280171  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:29.280640  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:29.280694  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:29.280748  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:29.309613  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:29.309638  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:29.309644  572647 cri.go:89] found id: ""
	I1124 13:48:29.309660  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:29.309730  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.314623  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.319864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:29.319962  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:29.348671  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:29.348699  572647 cri.go:89] found id: ""
	I1124 13:48:29.348709  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:29.348775  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.353662  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:29.353728  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:29.383017  572647 cri.go:89] found id: ""
	I1124 13:48:29.383046  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.383058  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:29.383066  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:29.383121  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:29.411238  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:29.411259  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:29.411264  572647 cri.go:89] found id: ""
	I1124 13:48:29.411271  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:29.411325  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.415976  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.420189  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:29.420264  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:29.449856  572647 cri.go:89] found id: ""
	I1124 13:48:29.449890  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.449921  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:29.449929  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:29.450001  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:29.480136  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.480164  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:29.480171  572647 cri.go:89] found id: ""
	I1124 13:48:29.480181  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:29.480258  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.484998  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.489433  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:29.489504  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:29.519804  572647 cri.go:89] found id: ""
	I1124 13:48:29.519841  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.519854  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:29.519864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:29.520048  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:29.549935  572647 cri.go:89] found id: ""
	I1124 13:48:29.549964  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.549974  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:29.549986  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:29.549997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:29.593521  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:29.593560  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:29.681751  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:29.681792  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:29.699198  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:29.699232  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:29.759823  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:29.759850  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:29.759863  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:29.798497  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:29.798534  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:29.835677  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:29.835718  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.864876  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:29.864923  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:29.898153  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:29.898186  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:29.932035  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:29.932073  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:29.971224  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:29.971258  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:30.026576  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:30.026619  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b44a9a38266a3       56cc512116c8f       8 seconds ago       Running             busybox                   0                   91e7e42c593d0       busybox                                          default
	8d4a4dd9d6632       ead0a4a53df89       13 seconds ago      Running             coredns                   0                   1c930bc4d6523       coredns-5dd5756b68-b5rrl                         kube-system
	c9c8f51adb6bb       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   840fae773d68e       storage-provisioner                              kube-system
	1dab1df16e654       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   30a65fd13bcca       kindnet-tpjvb                                    kube-system
	0b87cfcc163e3       ea1030da44aa1       28 seconds ago      Running             kube-proxy                0                   555af9e11f935       kube-proxy-hzfcx                                 kube-system
	b89c098ff2cb6       bb5e0dde9054c       46 seconds ago      Running             kube-apiserver            0                   b832e9f75c0f1       kube-apiserver-old-k8s-version-513442            kube-system
	f7663d3953f0e       4be79c38a4bab       46 seconds ago      Running             kube-controller-manager   0                   06bb689695cce       kube-controller-manager-old-k8s-version-513442   kube-system
	bdd5c20173350       f6f496300a2ae       46 seconds ago      Running             kube-scheduler            0                   ac1efcdb81d0e       kube-scheduler-old-k8s-version-513442            kube-system
	5793c7fd11b5c       73deb9a3f7025       46 seconds ago      Running             etcd                      0                   3c4129b98c0d7       etcd-old-k8s-version-513442                      kube-system
	
	
	==> containerd <==
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.636050137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-b5rrl,Uid:4e6c9b7c-5f0a-4c60-8197-20e985a07403,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c930bc4d6523dcc2ff99c9243131fcf23dfc7881b09c013bf55e68b23ecf25e\""
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.639799945Z" level=info msg="CreateContainer within sandbox \"1c930bc4d6523dcc2ff99c9243131fcf23dfc7881b09c013bf55e68b23ecf25e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.648881001Z" level=info msg="Container 8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.657829357Z" level=info msg="CreateContainer within sandbox \"1c930bc4d6523dcc2ff99c9243131fcf23dfc7881b09c013bf55e68b23ecf25e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89\""
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.658662420Z" level=info msg="StartContainer for \"8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89\""
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.659800869Z" level=info msg="connecting to shim 8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89" address="unix:///run/containerd/s/c69a9b00491bdefff20b5fba21aa1d556fb9c3a3bad974c8b8be870ca95e072b" protocol=ttrpc version=3
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.704634320Z" level=info msg="StartContainer for \"c9c8f51adb6bbca8e0f954ad9082c0c66235dce129e152dd682ab69622b44aac\" returns successfully"
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.716701551Z" level=info msg="StartContainer for \"8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89\" returns successfully"
	Nov 24 13:48:22 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:22.659740340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e21ee73b-578f-48c9-826d-ab3b4bbb7871,Namespace:default,Attempt:0,}"
	Nov 24 13:48:22 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:22.705643585Z" level=info msg="connecting to shim 91e7e42c593d0f49381ba051fa95a3bffc3c2fedf4ee572f1ee3e65a03cebfff" address="unix:///run/containerd/s/a6973921fa6bbb987fab0736637648be3dc3e077c5046184370bd0c127ef00c4" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 13:48:22 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:22.781316455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e21ee73b-578f-48c9-826d-ab3b4bbb7871,Namespace:default,Attempt:0,} returns sandbox id \"91e7e42c593d0f49381ba051fa95a3bffc3c2fedf4ee572f1ee3e65a03cebfff\""
	Nov 24 13:48:22 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:22.783364521Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.550927147Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.551949670Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396647"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.553332639Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.555518804Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.555999909Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.772594905s"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.556037581Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.557958127Z" level=info msg="CreateContainer within sandbox \"91e7e42c593d0f49381ba051fa95a3bffc3c2fedf4ee572f1ee3e65a03cebfff\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.566156418Z" level=info msg="Container b44a9a38266a36367dda4e29d517101d0bad25018140ed3049b32babe692f605: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.572811164Z" level=info msg="CreateContainer within sandbox \"91e7e42c593d0f49381ba051fa95a3bffc3c2fedf4ee572f1ee3e65a03cebfff\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"b44a9a38266a36367dda4e29d517101d0bad25018140ed3049b32babe692f605\""
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.573543998Z" level=info msg="StartContainer for \"b44a9a38266a36367dda4e29d517101d0bad25018140ed3049b32babe692f605\""
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.574401159Z" level=info msg="connecting to shim b44a9a38266a36367dda4e29d517101d0bad25018140ed3049b32babe692f605" address="unix:///run/containerd/s/a6973921fa6bbb987fab0736637648be3dc3e077c5046184370bd0c127ef00c4" protocol=ttrpc version=3
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.628848926Z" level=info msg="StartContainer for \"b44a9a38266a36367dda4e29d517101d0bad25018140ed3049b32babe692f605\" returns successfully"
	Nov 24 13:48:32 old-k8s-version-513442 containerd[663]: E1124 13:48:32.433506     663 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57003 - 26434 "HINFO IN 1735205229727733014.6660763770011463869. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021751094s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-513442
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-513442
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=old-k8s-version-513442
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_47_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:47:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-513442
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:48:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:48:22 +0000   Mon, 24 Nov 2025 13:47:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:48:22 +0000   Mon, 24 Nov 2025 13:47:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:48:22 +0000   Mon, 24 Nov 2025 13:47:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:48:22 +0000   Mon, 24 Nov 2025 13:48:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-513442
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                7bc159f8-7fe0-4f8d-82dc-0cc733a1645b
	  Boot ID:                    715d4626-373f-499b-b5de-b6d832ce4fe4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-b5rrl                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-513442                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-tpjvb                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-513442             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-513442    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-hzfcx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-513442             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 48s)  kubelet          Node old-k8s-version-513442 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 48s)  kubelet          Node old-k8s-version-513442 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x7 over 48s)  kubelet          Node old-k8s-version-513442 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-513442 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-513442 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-513442 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-513442 event: Registered Node old-k8s-version-513442 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-513442 status is now: NodeReady
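The Conditions and Capacity blocks above are the fields the node_conditions.go check earlier in the log reads back: the pressure conditions should be False, Ready should be True, and ephemeral-storage and CPU capacity are recorded. A minimal client-go sketch of reading those fields, again assuming a default kubeconfig:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Pressure conditions should be False on a healthy node; Ready should be True.
		for _, cond := range node.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure,
				corev1.NodePIDPressure, corev1.NodeReady:
				fmt.Printf("%s: %s=%s\n", node.Name, cond.Type, cond.Status)
			}
		}
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", node.Name, storage.String(), cpu.String())
	}
}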
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[Nov24 12:45] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a fb 84 7d 9e 9e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[ +25.292047] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[  +0.024207] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 8e 71 0b 76 c3 08 06
	[ +16.768103] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	[  +5.950770] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[Nov24 12:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 8b d0 4a da 7e 08 06
	[  +0.000557] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[  +1.903671] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 1f e8 fc 59 74 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[ +17.535584] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 31 ec 7c 1d 38 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	
	
	==> etcd [5793c7fd11b5c568735219e3d193c67360dde88032a438ae332a3e12d7fdf0a5] <==
	{"level":"info","ts":"2025-11-24T13:47:46.896061Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-24T13:47:47.18298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-24T13:47:47.183032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-24T13:47:47.183064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-11-24T13:47:47.183082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-24T13:47:47.18309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T13:47:47.183102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-24T13:47:47.183112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T13:47:47.184166Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-513442 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T13:47:47.184441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T13:47:47.184423Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T13:47:47.184639Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:47:47.184677Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T13:47:47.184697Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T13:47:47.185356Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:47:47.185462Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:47:47.185485Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:47:47.186127Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-11-24T13:47:47.186272Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T13:48:02.673385Z","caller":"traceutil/trace.go:171","msg":"trace[456960560] linearizableReadLoop","detail":"{readStateIndex:331; appliedIndex:330; }","duration":"136.421105ms","start":"2025-11-24T13:48:02.536946Z","end":"2025-11-24T13:48:02.673367Z","steps":["trace[456960560] 'read index received'  (duration: 136.248358ms)","trace[456960560] 'applied index is now lower than readState.Index'  (duration: 171.987µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:48:02.673673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.721804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-24T13:48:02.67373Z","caller":"traceutil/trace.go:171","msg":"trace[286257082] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:319; }","duration":"136.809717ms","start":"2025-11-24T13:48:02.536907Z","end":"2025-11-24T13:48:02.673717Z","steps":["trace[286257082] 'agreement among raft nodes before linearized reading'  (duration: 136.690513ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:48:02.673851Z","caller":"traceutil/trace.go:171","msg":"trace[2009156990] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"168.350659ms","start":"2025-11-24T13:48:02.505481Z","end":"2025-11-24T13:48:02.673832Z","steps":["trace[2009156990] 'process raft request'  (duration: 167.775897ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:48:02.673811Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.836489ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:48:02.673892Z","caller":"traceutil/trace.go:171","msg":"trace[1422014017] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:319; }","duration":"132.929171ms","start":"2025-11-24T13:48:02.54095Z","end":"2025-11-24T13:48:02.673879Z","steps":["trace[1422014017] 'agreement among raft nodes before linearized reading'  (duration: 132.804065ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:48:33 up  2:30,  0 user,  load average: 2.03, 2.80, 1.92
	Linux old-k8s-version-513442 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1dab1df16e654e8d2bf5248f41d4e61a9922afd9e9aa99eb10b51ff76d83fd27] <==
	I1124 13:48:08.805828       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:48:08.806157       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 13:48:08.806325       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:48:08.806347       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:48:08.806366       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:48:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:48:09.065201       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:48:09.065237       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:48:09.065250       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:48:09.205219       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:48:09.465641       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:48:09.465667       1 metrics.go:72] Registering metrics
	I1124 13:48:09.465726       1 controller.go:711] "Syncing nftables rules"
	I1124 13:48:19.068504       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:48:19.068576       1 main.go:301] handling current node
	I1124 13:48:29.065440       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:48:29.065473       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b89c098ff2cb630c37cf57f5061688d52a419284b629da3305843a9dee1a5dbb] <==
	I1124 13:47:48.951700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 13:47:48.951970       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 13:47:48.951984       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 13:47:48.952108       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 13:47:48.952141       1 aggregator.go:166] initial CRD sync complete...
	I1124 13:47:48.952149       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 13:47:48.952156       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 13:47:48.952165       1 cache.go:39] Caches are synced for autoregister controller
	I1124 13:47:48.953986       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 13:47:49.152644       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:47:49.858204       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:47:49.862657       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:47:49.862682       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:47:50.422560       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:47:50.472548       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:47:50.570004       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:47:50.579741       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 13:47:50.580884       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 13:47:50.586999       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:47:50.885484       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 13:47:51.864040       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 13:47:51.877619       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:47:51.890804       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 13:48:04.597347       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:48:04.651565       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [f7663d3953f0ee1aca9b8f557f4e81791e15502a0a6447b494d2035c4c9b2dfc] <==
	I1124 13:48:03.884906       1 shared_informer.go:318] Caches are synced for deployment
	I1124 13:48:03.932363       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 13:48:03.941297       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 13:48:04.243318       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 13:48:04.243355       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 13:48:04.258877       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 13:48:04.607851       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hzfcx"
	I1124 13:48:04.611600       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tpjvb"
	I1124 13:48:04.656277       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 13:48:04.748220       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bcd4m"
	I1124 13:48:04.756616       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-b5rrl"
	I1124 13:48:04.767398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.018323ms"
	I1124 13:48:04.782835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.361034ms"
	I1124 13:48:04.782967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.68µs"
	I1124 13:48:04.940856       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 13:48:04.951934       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bcd4m"
	I1124 13:48:04.962829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.807545ms"
	I1124 13:48:04.970616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.726674ms"
	I1124 13:48:04.970784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.42µs"
	I1124 13:48:19.202453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.753µs"
	I1124 13:48:19.220547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.147µs"
	I1124 13:48:20.044339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.847µs"
	I1124 13:48:20.080458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.482374ms"
	I1124 13:48:20.080575       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.63µs"
	I1124 13:48:23.770117       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0b87cfcc163e379c4e72aa8c64739d9d13a801c140b5fabe7cbbc11022cfd12a] <==
	I1124 13:48:05.277959       1 server_others.go:69] "Using iptables proxy"
	I1124 13:48:05.288147       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1124 13:48:05.312455       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:48:05.315014       1 server_others.go:152] "Using iptables Proxier"
	I1124 13:48:05.315055       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 13:48:05.315064       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 13:48:05.315106       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 13:48:05.315978       1 server.go:846] "Version info" version="v1.28.0"
	I1124 13:48:05.316072       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:48:05.317668       1 config.go:188] "Starting service config controller"
	I1124 13:48:05.317713       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 13:48:05.317754       1 config.go:315] "Starting node config controller"
	I1124 13:48:05.317762       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 13:48:05.318091       1 config.go:97] "Starting endpoint slice config controller"
	I1124 13:48:05.318114       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 13:48:05.418055       1 shared_informer.go:318] Caches are synced for service config
	I1124 13:48:05.418104       1 shared_informer.go:318] Caches are synced for node config
	I1124 13:48:05.419230       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [bdd5c20173350449ff23a9ee9a791fe034c518afc7784448209ad9b0a5c32a9f] <==
	W1124 13:47:49.773882       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 13:47:49.773941       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 13:47:49.817194       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1124 13:47:49.817241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1124 13:47:49.898465       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 13:47:49.898514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 13:47:49.973231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 13:47:49.973807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 13:47:49.975515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 13:47:49.975624       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 13:47:50.044243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 13:47:50.044284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 13:47:50.065787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 13:47:50.065828       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1124 13:47:50.067051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1124 13:47:50.067084       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1124 13:47:50.088454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 13:47:50.088492       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 13:47:50.094062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 13:47:50.094103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 13:47:50.176377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 13:47:50.176425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 13:47:50.188050       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 13:47:50.188094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1124 13:47:51.410574       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 13:48:03 old-k8s-version-513442 kubelet[1521]: I1124 13:48:03.736815    1521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.621236    1521 topology_manager.go:215] "Topology Admit Handler" podUID="f4ba208a-1a78-46ae-9684-ff3309400852" podNamespace="kube-system" podName="kube-proxy-hzfcx"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.628198    1521 topology_manager.go:215] "Topology Admit Handler" podUID="c7df115a-8394-4f80-ac6c-5b1fc95337b5" podNamespace="kube-system" podName="kindnet-tpjvb"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.701758    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7df115a-8394-4f80-ac6c-5b1fc95337b5-xtables-lock\") pod \"kindnet-tpjvb\" (UID: \"c7df115a-8394-4f80-ac6c-5b1fc95337b5\") " pod="kube-system/kindnet-tpjvb"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702003    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cdcx\" (UniqueName: \"kubernetes.io/projected/f4ba208a-1a78-46ae-9684-ff3309400852-kube-api-access-6cdcx\") pod \"kube-proxy-hzfcx\" (UID: \"f4ba208a-1a78-46ae-9684-ff3309400852\") " pod="kube-system/kube-proxy-hzfcx"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702157    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c7df115a-8394-4f80-ac6c-5b1fc95337b5-cni-cfg\") pod \"kindnet-tpjvb\" (UID: \"c7df115a-8394-4f80-ac6c-5b1fc95337b5\") " pod="kube-system/kindnet-tpjvb"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702290    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7df115a-8394-4f80-ac6c-5b1fc95337b5-lib-modules\") pod \"kindnet-tpjvb\" (UID: \"c7df115a-8394-4f80-ac6c-5b1fc95337b5\") " pod="kube-system/kindnet-tpjvb"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702379    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnddq\" (UniqueName: \"kubernetes.io/projected/c7df115a-8394-4f80-ac6c-5b1fc95337b5-kube-api-access-cnddq\") pod \"kindnet-tpjvb\" (UID: \"c7df115a-8394-4f80-ac6c-5b1fc95337b5\") " pod="kube-system/kindnet-tpjvb"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702452    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4ba208a-1a78-46ae-9684-ff3309400852-kube-proxy\") pod \"kube-proxy-hzfcx\" (UID: \"f4ba208a-1a78-46ae-9684-ff3309400852\") " pod="kube-system/kube-proxy-hzfcx"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702483    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4ba208a-1a78-46ae-9684-ff3309400852-xtables-lock\") pod \"kube-proxy-hzfcx\" (UID: \"f4ba208a-1a78-46ae-9684-ff3309400852\") " pod="kube-system/kube-proxy-hzfcx"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702513    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4ba208a-1a78-46ae-9684-ff3309400852-lib-modules\") pod \"kube-proxy-hzfcx\" (UID: \"f4ba208a-1a78-46ae-9684-ff3309400852\") " pod="kube-system/kube-proxy-hzfcx"
	Nov 24 13:48:06 old-k8s-version-513442 kubelet[1521]: I1124 13:48:06.009542    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hzfcx" podStartSLOduration=2.00948849 podCreationTimestamp="2025-11-24 13:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:06.009255456 +0000 UTC m=+14.175181609" watchObservedRunningTime="2025-11-24 13:48:06.00948849 +0000 UTC m=+14.175414641"
	Nov 24 13:48:09 old-k8s-version-513442 kubelet[1521]: I1124 13:48:09.017801    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-tpjvb" podStartSLOduration=2.028995374 podCreationTimestamp="2025-11-24 13:48:04 +0000 UTC" firstStartedPulling="2025-11-24 13:48:05.423030434 +0000 UTC m=+13.588956573" lastFinishedPulling="2025-11-24 13:48:08.411777827 +0000 UTC m=+16.577703968" observedRunningTime="2025-11-24 13:48:09.017454231 +0000 UTC m=+17.183380385" watchObservedRunningTime="2025-11-24 13:48:09.017742769 +0000 UTC m=+17.183668923"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.126026    1521 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.199313    1521 topology_manager.go:215] "Topology Admit Handler" podUID="65efb270-100a-4e7c-bee8-24de1df28586" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.202110    1521 topology_manager.go:215] "Topology Admit Handler" podUID="4e6c9b7c-5f0a-4c60-8197-20e985a07403" podNamespace="kube-system" podName="coredns-5dd5756b68-b5rrl"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.296963    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ccn\" (UniqueName: \"kubernetes.io/projected/65efb270-100a-4e7c-bee8-24de1df28586-kube-api-access-84ccn\") pod \"storage-provisioner\" (UID: \"65efb270-100a-4e7c-bee8-24de1df28586\") " pod="kube-system/storage-provisioner"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.297219    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/65efb270-100a-4e7c-bee8-24de1df28586-tmp\") pod \"storage-provisioner\" (UID: \"65efb270-100a-4e7c-bee8-24de1df28586\") " pod="kube-system/storage-provisioner"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.297296    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj4xm\" (UniqueName: \"kubernetes.io/projected/4e6c9b7c-5f0a-4c60-8197-20e985a07403-kube-api-access-sj4xm\") pod \"coredns-5dd5756b68-b5rrl\" (UID: \"4e6c9b7c-5f0a-4c60-8197-20e985a07403\") " pod="kube-system/coredns-5dd5756b68-b5rrl"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.297327    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e6c9b7c-5f0a-4c60-8197-20e985a07403-config-volume\") pod \"coredns-5dd5756b68-b5rrl\" (UID: \"4e6c9b7c-5f0a-4c60-8197-20e985a07403\") " pod="kube-system/coredns-5dd5756b68-b5rrl"
	Nov 24 13:48:20 old-k8s-version-513442 kubelet[1521]: I1124 13:48:20.055454    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-b5rrl" podStartSLOduration=16.055384325 podCreationTimestamp="2025-11-24 13:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:20.043996165 +0000 UTC m=+28.209922315" watchObservedRunningTime="2025-11-24 13:48:20.055384325 +0000 UTC m=+28.221310494"
	Nov 24 13:48:20 old-k8s-version-513442 kubelet[1521]: I1124 13:48:20.072835    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.072769008 podCreationTimestamp="2025-11-24 13:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:20.05633827 +0000 UTC m=+28.222264421" watchObservedRunningTime="2025-11-24 13:48:20.072769008 +0000 UTC m=+28.238695171"
	Nov 24 13:48:22 old-k8s-version-513442 kubelet[1521]: I1124 13:48:22.349894    1521 topology_manager.go:215] "Topology Admit Handler" podUID="e21ee73b-578f-48c9-826d-ab3b4bbb7871" podNamespace="default" podName="busybox"
	Nov 24 13:48:22 old-k8s-version-513442 kubelet[1521]: I1124 13:48:22.417169    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmgg8\" (UniqueName: \"kubernetes.io/projected/e21ee73b-578f-48c9-826d-ab3b4bbb7871-kube-api-access-mmgg8\") pod \"busybox\" (UID: \"e21ee73b-578f-48c9-826d-ab3b4bbb7871\") " pod="default/busybox"
	Nov 24 13:48:26 old-k8s-version-513442 kubelet[1521]: I1124 13:48:26.061183    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.287793929 podCreationTimestamp="2025-11-24 13:48:22 +0000 UTC" firstStartedPulling="2025-11-24 13:48:22.783005961 +0000 UTC m=+30.948932098" lastFinishedPulling="2025-11-24 13:48:25.556333595 +0000 UTC m=+33.722259740" observedRunningTime="2025-11-24 13:48:26.061015161 +0000 UTC m=+34.226941311" watchObservedRunningTime="2025-11-24 13:48:26.061121571 +0000 UTC m=+34.227047722"
	
	
	==> storage-provisioner [c9c8f51adb6bbca8e0f954ad9082c0c66235dce129e152dd682ab69622b44aac] <==
	I1124 13:48:19.713946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:48:19.725060       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:48:19.725122       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 13:48:19.732798       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:48:19.733028       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-513442_df294b40-30a6-4b8c-83ff-3d897f2504d8!
	I1124 13:48:19.733030       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"938f90ea-7103-4290-984c-f5e7c1aae849", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-513442_df294b40-30a6-4b8c-83ff-3d897f2504d8 became leader
	I1124 13:48:19.833675       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-513442_df294b40-30a6-4b8c-83ff-3d897f2504d8!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-513442 -n old-k8s-version-513442
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-513442 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
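The post-mortem above is assembled from the helper commands echoed in-line. To rerun the same checks by hand against this profile, roughly the following sequence should be equivalent (profile name, kubectl context, and flags are copied from the helper output; out/minikube-linux-amd64 is the locally built binary this test run used, so a plain minikube install would substitute its own binary):

	docker inspect old-k8s-version-513442
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-513442 -n old-k8s-version-513442
	kubectl --context old-k8s-version-513442 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 -p old-k8s-version-513442 logs -n 25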
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-513442
helpers_test.go:243: (dbg) docker inspect old-k8s-version-513442:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc",
	        "Created": "2025-11-24T13:47:35.092444426Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 609088,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:47:35.135903717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc/hostname",
	        "HostsPath": "/var/lib/docker/containers/13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc/hosts",
	        "LogPath": "/var/lib/docker/containers/13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc/13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc-json.log",
	        "Name": "/old-k8s-version-513442",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-513442:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-513442",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13426d2cf76c27dd9f2a390d750a5229384c014f5a7850e15adbf074b454afbc",
	                "LowerDir": "/var/lib/docker/overlay2/bd85d41ae72067109a66add256d4bca169e9772c5d88f4cadf18fe98e5e00338-init/diff:/var/lib/docker/overlay2/0f013e03fd0eaee4efc608fb0376e7d3e8ba628388f5191310c2259ab273ad26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd85d41ae72067109a66add256d4bca169e9772c5d88f4cadf18fe98e5e00338/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd85d41ae72067109a66add256d4bca169e9772c5d88f4cadf18fe98e5e00338/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd85d41ae72067109a66add256d4bca169e9772c5d88f4cadf18fe98e5e00338/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-513442",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-513442/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-513442",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-513442",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-513442",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "712b075dd23c6c1fbc5bbaa3b37767187ba4a40be8134789ce23d7e72a4abc25",
	            "SandboxKey": "/var/run/docker/netns/712b075dd23c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-513442": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57f535f2d59b940a7e2130a9a6bcf664e3f052e878c97575bfeea5e13ed58e73",
	                    "EndpointID": "439facefab95f9d1822733d1b1004570b6d417a88dc9a1ee26ae6d774889308f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "46:21:b5:12:37:e7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-513442",
	                        "13426d2cf76c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
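The NetworkSettings block above shows the host ports Docker assigned to the requested 127.0.0.1 bindings (for example 8443/tcp mapped to 33439). If one of those mappings needs to be read back outside the test harness, a Go-template query against docker inspect should return it directly; the container name is taken from the inspect output above, and the assigned host port will differ between runs:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' old-k8s-version-513442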
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-513442 -n old-k8s-version-513442
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-513442 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-513442 logs -n 25: (1.184466791s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-355661 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo containerd config dump                                                                                                                                                                                                        │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ start   │ -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ ssh     │ -p cilium-355661 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo crio config                                                                                                                                                                                                                   │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ delete  │ -p cilium-355661                                                                                                                                                                                                                                    │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ start   │ -p force-systemd-flag-775412 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ start   │ -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ force-systemd-flag-775412 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p force-systemd-flag-775412                                                                                                                                                                                                                        │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ -p NoKubernetes-787855 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │                     │
	│ start   │ -p cert-options-342221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ stop    │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p NoKubernetes-787855 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ cert-options-342221 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ -p cert-options-342221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p cert-options-342221                                                                                                                                                                                                                              │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p old-k8s-version-513442 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-513442    │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:48 UTC │
	│ ssh     │ -p NoKubernetes-787855 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │                     │
	│ delete  │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p no-preload-608395 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-608395         │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:48 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:47:35
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:47:35.072446  608917 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:47:35.072749  608917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:47:35.072763  608917 out.go:374] Setting ErrFile to fd 2...
	I1124 13:47:35.072768  608917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:47:35.073046  608917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:47:35.073526  608917 out.go:368] Setting JSON to false
	I1124 13:47:35.074857  608917 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8994,"bootTime":1763983061,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:47:35.074959  608917 start.go:143] virtualization: kvm guest
	I1124 13:47:35.077490  608917 out.go:179] * [no-preload-608395] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:47:35.079255  608917 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:47:35.079255  608917 notify.go:221] Checking for updates...
	I1124 13:47:35.080776  608917 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:47:35.082396  608917 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:47:35.083932  608917 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:47:35.085251  608917 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:47:35.086603  608917 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:47:35.089427  608917 config.go:182] Loaded profile config "cert-expiration-099863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:35.089575  608917 config.go:182] Loaded profile config "kubernetes-upgrade-358357": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:35.089706  608917 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:47:35.089837  608917 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:47:35.114581  608917 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:47:35.114769  608917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:47:35.180508  608917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 13:47:35.169616068 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:47:35.180627  608917 docker.go:319] overlay module found
	I1124 13:47:35.182258  608917 out.go:179] * Using the docker driver based on user configuration
	I1124 13:47:35.183642  608917 start.go:309] selected driver: docker
	I1124 13:47:35.183663  608917 start.go:927] validating driver "docker" against <nil>
	I1124 13:47:35.183675  608917 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:47:35.184437  608917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:47:35.249663  608917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 13:47:35.237755455 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:47:35.249975  608917 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:47:35.250402  608917 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:47:35.252318  608917 out.go:179] * Using Docker driver with root privileges
	I1124 13:47:35.254354  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:47:35.254446  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:35.254457  608917 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:47:35.254652  608917 start.go:353] cluster config:
	{Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:35.256201  608917 out.go:179] * Starting "no-preload-608395" primary control-plane node in "no-preload-608395" cluster
	I1124 13:47:35.257392  608917 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:47:35.258857  608917 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:47:35.260330  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:47:35.260404  608917 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:47:35.260496  608917 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json ...
	I1124 13:47:35.260537  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json: {Name:mk2f4d5eff7070dcec35f39f30e01cd0b3fcce8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:35.260546  608917 cache.go:107] acquiring lock: {Name:mk28ec677a69a6f418643b8b89331fa25b8c42f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260546  608917 cache.go:107] acquiring lock: {Name:mkad3cbb6fa2e7f41e4d7c0e1e3c74156dc55521 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260557  608917 cache.go:107] acquiring lock: {Name:mk7aef7fc4ff6e4e4541fdeb1d5e26c13a66856b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260584  608917 cache.go:107] acquiring lock: {Name:mk586ecbe7f4b4aab48f8ad28d0d7b1848898c9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260604  608917 cache.go:107] acquiring lock: {Name:mkf548ea8c9721a4e4ad1e37073c3deea8530810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260622  608917 cache.go:107] acquiring lock: {Name:mk1ce266bd6b9003a6a371facbc84809dce0c3c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260651  608917 cache.go:107] acquiring lock: {Name:mk687b2dcc146d43e9d607f472f2f08a2307baed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260663  608917 cache.go:107] acquiring lock: {Name:mk4b559f0fdae6e96edea26981618bf8d9d50b2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260712  608917 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:35.260755  608917 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:35.260801  608917 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:35.260819  608917 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:35.260852  608917 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:35.260858  608917 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:47:35.260727  608917 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:35.261039  608917 cache.go:115] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 13:47:35.261050  608917 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 523.955µs
	I1124 13:47:35.261069  608917 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 13:47:35.262249  608917 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:35.262277  608917 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:35.262359  608917 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:35.262407  608917 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:47:35.262461  608917 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:35.262522  608917 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:35.262735  608917 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:35.285963  608917 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:47:35.285989  608917 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:47:35.286014  608917 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:47:35.286057  608917 start.go:360] acquireMachinesLock for no-preload-608395: {Name:mkc9d1cf0cec9be2b369f1e47c690fc0399e88e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.286191  608917 start.go:364] duration metric: took 102.178µs to acquireMachinesLock for "no-preload-608395"
	I1124 13:47:35.286224  608917 start.go:93] Provisioning new machine with config: &{Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:47:35.286330  608917 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:47:30.558317  607669 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:47:30.558626  607669 start.go:159] libmachine.API.Create for "old-k8s-version-513442" (driver="docker")
	I1124 13:47:30.558656  607669 client.go:173] LocalClient.Create starting
	I1124 13:47:30.558725  607669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem
	I1124 13:47:30.558754  607669 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:30.558772  607669 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:30.558826  607669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem
	I1124 13:47:30.558849  607669 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:30.558860  607669 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:30.559212  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:47:30.577139  607669 cli_runner.go:211] docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:47:30.577245  607669 network_create.go:284] running [docker network inspect old-k8s-version-513442] to gather additional debugging logs...
	I1124 13:47:30.577276  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442
	W1124 13:47:30.593786  607669 cli_runner.go:211] docker network inspect old-k8s-version-513442 returned with exit code 1
	I1124 13:47:30.593826  607669 network_create.go:287] error running [docker network inspect old-k8s-version-513442]: docker network inspect old-k8s-version-513442: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-513442 not found
	I1124 13:47:30.593854  607669 network_create.go:289] output of [docker network inspect old-k8s-version-513442]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-513442 not found
	
	** /stderr **
	I1124 13:47:30.594026  607669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:30.613315  607669 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8afb578efdfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:5e:46:43:aa:fe} reservation:<nil>}
	I1124 13:47:30.614364  607669 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ca3a55f53176 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:98:62:4c:91:8f} reservation:<nil>}
	I1124 13:47:30.614827  607669 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e11236ccf9ba IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:36:3b:80:be:95:34} reservation:<nil>}
	I1124 13:47:30.615410  607669 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-35b7bf6fd97a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:12:4e:d4:19:26} reservation:<nil>}
	I1124 13:47:30.616018  607669 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1f5932eecbe7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:aa:ff:d3:cd:de:0f} reservation:<nil>}
	I1124 13:47:30.617269  607669 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7fa00}
	I1124 13:47:30.617308  607669 network_create.go:124] attempt to create docker network old-k8s-version-513442 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 13:47:30.617398  607669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-513442 old-k8s-version-513442
	I1124 13:47:30.671102  607669 network_create.go:108] docker network old-k8s-version-513442 192.168.94.0/24 created
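	(Note: the network.go lines above scan the private 192.168.x.0/24 ranges until one is not claimed by an existing bridge, then create a per-profile Docker network on it. A minimal Go sketch of that last step, shelling out to the docker CLI with the same flags seen in the log; the helper and its hard-coded values are illustrative only, not minikube's network_create implementation.)

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Values copied from the log above; adjust for your own profile.
		name := "old-k8s-version-513442"
		subnet := "192.168.94.0/24"
		gateway := "192.168.94.1"

		args := []string{
			"network", "create",
			"--driver=bridge",
			"--subnet=" + subnet,
			"--gateway=" + gateway,
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=" + name,
			name,
		}
		out, err := exec.Command("docker", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("docker network create failed: %v\n%s", err, out)
		}
		fmt.Printf("created network %s (%s): %s", name, subnet, out)
	}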
	I1124 13:47:30.671150  607669 kic.go:121] calculated static IP "192.168.94.2" for the "old-k8s-version-513442" container
	I1124 13:47:30.671218  607669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:47:30.689078  607669 cli_runner.go:164] Run: docker volume create old-k8s-version-513442 --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:47:30.709312  607669 oci.go:103] Successfully created a docker volume old-k8s-version-513442
	I1124 13:47:30.709408  607669 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-513442-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --entrypoint /usr/bin/test -v old-k8s-version-513442:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:47:31.132905  607669 oci.go:107] Successfully prepared a docker volume old-k8s-version-513442
	I1124 13:47:31.132980  607669 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:47:31.132992  607669 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:47:31.133075  607669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-513442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:47:35.011677  607669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-513442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.878547269s)
	I1124 13:47:35.011716  607669 kic.go:203] duration metric: took 3.878721361s to extract preloaded images to volume ...
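	(The preload step above mounts the cached preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 into a throwaway kicbase container and untars it into the profile's volume with "tar -I lz4". A rough Go sketch of reading such an lz4-compressed tarball directly, using the third-party github.com/pierrec/lz4/v4 package; this illustrates only the archive format, not how minikube performs the extraction.)

	package main

	import (
		"archive/tar"
		"fmt"
		"io"
		"log"
		"os"

		"github.com/pierrec/lz4/v4"
	)

	func main() {
		// Example path; the CI run keeps the tarball under .minikube/cache/preloaded-tarball/.
		f, err := os.Open("preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Decompress on the fly and list the image layers inside the archive.
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s\t%d bytes\n", hdr.Name, hdr.Size)
		}
	}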
	W1124 13:47:35.011796  607669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:47:35.011829  607669 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:47:35.011871  607669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:47:35.073961  607669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-513442 --name old-k8s-version-513442 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-513442 --network old-k8s-version-513442 --ip 192.168.94.2 --volume old-k8s-version-513442:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:47:32.801968  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:32.802485  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:47:32.802542  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:32.802595  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:32.832902  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:32.832956  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:32.832963  572647 cri.go:89] found id: ""
	I1124 13:47:32.832972  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:32.833038  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.837621  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.841927  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:32.842013  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:32.877193  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:32.877214  572647 cri.go:89] found id: ""
	I1124 13:47:32.877223  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:32.877290  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.882239  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:32.882329  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:32.912677  572647 cri.go:89] found id: ""
	I1124 13:47:32.912709  572647 logs.go:282] 0 containers: []
	W1124 13:47:32.912727  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:32.912735  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:32.912799  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:32.942634  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:32.942656  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:32.942662  572647 cri.go:89] found id: ""
	I1124 13:47:32.942672  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:32.942735  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.947427  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.951442  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:32.951519  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:32.982583  572647 cri.go:89] found id: ""
	I1124 13:47:32.982614  572647 logs.go:282] 0 containers: []
	W1124 13:47:32.982626  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:32.982635  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:32.982706  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:33.013412  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:33.013432  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:33.013435  572647 cri.go:89] found id: ""
	I1124 13:47:33.013444  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:33.013492  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:33.017848  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:33.021955  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:33.022038  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:33.055691  572647 cri.go:89] found id: ""
	I1124 13:47:33.055722  572647 logs.go:282] 0 containers: []
	W1124 13:47:33.055733  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:33.055743  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:33.055822  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:33.086844  572647 cri.go:89] found id: ""
	I1124 13:47:33.086868  572647 logs.go:282] 0 containers: []
	W1124 13:47:33.086877  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:33.086887  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:33.086904  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:33.140737  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:33.140775  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:33.185221  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:33.185259  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:33.218642  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:33.218669  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:33.251506  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:33.251634  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:33.346627  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:33.346672  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:33.363530  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:33.363571  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:33.400997  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:33.401042  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:33.446051  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:33.446088  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:33.484418  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:33.484465  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:33.537056  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:33.537093  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:33.611727  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:33.611762  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:33.611778  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.150015  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:36.150435  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:47:36.150499  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:36.150559  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:36.181496  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:36.181524  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.181530  572647 cri.go:89] found id: ""
	I1124 13:47:36.181541  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:36.181626  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.186587  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.190995  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:36.191076  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
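	(While the new profiles are being created, the separate kubernetes-upgrade-358357 start (pid 572647) keeps probing the apiserver's /healthz endpoint and, on "connection refused", falls back to collecting kubelet, containerd and control-plane container logs via crictl. A bare-bones sketch of such a probe; the address is the one from the log, and InsecureSkipVerify is used only because this throwaway check targets a cluster-local self-signed certificate. It is not minikube's actual health-check code.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for attempt := 1; attempt <= 5; attempt++ {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err != nil {
				fmt.Printf("attempt %d: apiserver not ready: %v\n", attempt, err)
				time.Sleep(2 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Printf("attempt %d: healthz returned %s\n", attempt, resp.Status)
			return
		}
	}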
	I1124 13:47:35.288531  608917 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:47:35.288826  608917 start.go:159] libmachine.API.Create for "no-preload-608395" (driver="docker")
	I1124 13:47:35.288879  608917 client.go:173] LocalClient.Create starting
	I1124 13:47:35.288981  608917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem
	I1124 13:47:35.289027  608917 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:35.289053  608917 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:35.289129  608917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem
	I1124 13:47:35.289159  608917 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:35.289172  608917 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:35.289667  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:47:35.309178  608917 cli_runner.go:211] docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:47:35.309257  608917 network_create.go:284] running [docker network inspect no-preload-608395] to gather additional debugging logs...
	I1124 13:47:35.309283  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395
	W1124 13:47:35.328323  608917 cli_runner.go:211] docker network inspect no-preload-608395 returned with exit code 1
	I1124 13:47:35.328350  608917 network_create.go:287] error running [docker network inspect no-preload-608395]: docker network inspect no-preload-608395: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-608395 not found
	I1124 13:47:35.328362  608917 network_create.go:289] output of [docker network inspect no-preload-608395]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-608395 not found
	
	** /stderr **
	I1124 13:47:35.328448  608917 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:35.351281  608917 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8afb578efdfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:5e:46:43:aa:fe} reservation:<nil>}
	I1124 13:47:35.352105  608917 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ca3a55f53176 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:98:62:4c:91:8f} reservation:<nil>}
	I1124 13:47:35.352583  608917 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e11236ccf9ba IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:36:3b:80:be:95:34} reservation:<nil>}
	I1124 13:47:35.353066  608917 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-35b7bf6fd97a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:12:4e:d4:19:26} reservation:<nil>}
	I1124 13:47:35.353566  608917 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1f5932eecbe7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:aa:ff:d3:cd:de:0f} reservation:<nil>}
	I1124 13:47:35.354145  608917 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-57f535f2d59b IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:6e:28:a9:1e:8a:96} reservation:<nil>}
	I1124 13:47:35.354775  608917 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d86bc0}
	I1124 13:47:35.354805  608917 network_create.go:124] attempt to create docker network no-preload-608395 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 13:47:35.354861  608917 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-608395 no-preload-608395
	I1124 13:47:35.432539  608917 network_create.go:108] docker network no-preload-608395 192.168.103.0/24 created
	I1124 13:47:35.432598  608917 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-608395" container
	I1124 13:47:35.432695  608917 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:47:35.453593  608917 cli_runner.go:164] Run: docker volume create no-preload-608395 --label name.minikube.sigs.k8s.io=no-preload-608395 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:47:35.471825  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:47:35.475329  608917 oci.go:103] Successfully created a docker volume no-preload-608395
	I1124 13:47:35.475418  608917 cli_runner.go:164] Run: docker run --rm --name no-preload-608395-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-608395 --entrypoint /usr/bin/test -v no-preload-608395:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:47:35.484374  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:47:35.522730  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:47:35.528813  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:47:35.529239  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:47:35.541677  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:47:35.561542  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:47:35.640840  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 13:47:35.640868  608917 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 380.250244ms
	I1124 13:47:35.640883  608917 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 13:47:35.985260  608917 oci.go:107] Successfully prepared a docker volume no-preload-608395
	I1124 13:47:35.985319  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1124 13:47:35.985414  608917 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:47:35.985453  608917 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:47:35.985506  608917 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:47:36.047047  608917 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-608395 --name no-preload-608395 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-608395 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-608395 --network no-preload-608395 --ip 192.168.103.2 --volume no-preload-608395:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:47:36.258467  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1124 13:47:36.258503  608917 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 997.955969ms
	I1124 13:47:36.258519  608917 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1124 13:47:36.410125  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Running}}
	I1124 13:47:36.432289  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.453312  608917 cli_runner.go:164] Run: docker exec no-preload-608395 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:47:36.504193  608917 oci.go:144] the created container "no-preload-608395" has a running status.
	I1124 13:47:36.504226  608917 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa...
	I1124 13:47:36.604837  608917 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:47:36.631267  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.655799  608917 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:47:36.655830  608917 kic_runner.go:114] Args: [docker exec --privileged no-preload-608395 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:47:36.705661  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.729778  608917 machine.go:94] provisionDockerMachine start ...
	I1124 13:47:36.729884  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:36.756901  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:36.757367  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:36.757380  608917 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:47:36.758446  608917 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 13:47:37.510037  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1124 13:47:37.510068  608917 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 2.249448579s
	I1124 13:47:37.510081  608917 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1124 13:47:37.572176  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1124 13:47:37.572211  608917 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.31168357s
	I1124 13:47:37.572229  608917 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1124 13:47:37.595833  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1124 13:47:37.595868  608917 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 2.335217312s
	I1124 13:47:37.595886  608917 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1124 13:47:37.719899  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1124 13:47:37.719956  608917 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 2.45935214s
	I1124 13:47:37.719969  608917 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1124 13:47:38.059972  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1124 13:47:38.060022  608917 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.799433794s
	I1124 13:47:38.060036  608917 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1124 13:47:38.060055  608917 cache.go:87] Successfully saved all images to host disk.
	I1124 13:47:39.915534  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-608395
	
	I1124 13:47:39.915567  608917 ubuntu.go:182] provisioning hostname "no-preload-608395"
	I1124 13:47:39.915651  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:39.936421  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:39.936658  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:39.936672  608917 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-608395 && echo "no-preload-608395" | sudo tee /etc/hostname
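
The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls above are how the provisioner finds the host port Docker mapped to the container's SSH port before dialing 127.0.0.1. A minimal Go sketch of the same lookup (not minikube's own code; the container name is taken from this run) could look like:

// Sketch only: resolve the host port mapped to a container's 22/tcp using the
// same inspect template that appears in this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	// Same Go template as the `docker container inspect -f` calls above.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("no-preload-608395")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// The provisioner then dials SSH at 127.0.0.1:<port> (33441 in this run).
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}
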
	I1124 13:47:35.415632  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Running}}
	I1124 13:47:35.436407  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.457824  607669 cli_runner.go:164] Run: docker exec old-k8s-version-513442 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:47:35.505936  607669 oci.go:144] the created container "old-k8s-version-513442" has a running status.
	I1124 13:47:35.505993  607669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa...
	I1124 13:47:35.536159  607669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:47:35.565751  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.587350  607669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:47:35.587376  607669 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-513442 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:47:35.639485  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.659275  607669 machine.go:94] provisionDockerMachine start ...
	I1124 13:47:35.659377  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:35.682791  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:35.683193  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:35.683215  607669 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:47:35.683887  607669 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57402->127.0.0.1:33435: read: connection reset by peer
	I1124 13:47:38.829345  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-513442
	
	I1124 13:47:38.829376  607669 ubuntu.go:182] provisioning hostname "old-k8s-version-513442"
	I1124 13:47:38.829451  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:38.847276  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:38.847521  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:38.847540  607669 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-513442 && echo "old-k8s-version-513442" | sudo tee /etc/hostname
	I1124 13:47:39.005190  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-513442
	
	I1124 13:47:39.005277  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.023623  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:39.023848  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:39.023866  607669 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-513442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-513442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-513442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:47:39.170228  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:47:39.170266  607669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:47:39.170286  607669 ubuntu.go:190] setting up certificates
	I1124 13:47:39.170295  607669 provision.go:84] configureAuth start
	I1124 13:47:39.170348  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.189446  607669 provision.go:143] copyHostCerts
	I1124 13:47:39.189521  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:47:39.189536  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:47:39.189619  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:47:39.189751  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:47:39.189764  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:47:39.189810  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:47:39.189989  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:47:39.190006  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:47:39.190054  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:47:39.190154  607669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-513442 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-513442]
	I1124 13:47:39.227079  607669 provision.go:177] copyRemoteCerts
	I1124 13:47:39.227139  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:47:39.227177  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.244951  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.349311  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:47:39.371319  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:47:39.391311  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 13:47:39.411071  607669 provision.go:87] duration metric: took 240.75737ms to configureAuth
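
configureAuth above generates a server certificate whose SANs are listed in the provision.go:117 line (127.0.0.1, 192.168.94.2, localhost, minikube, old-k8s-version-513442). A rough Go sketch of building such a certificate follows; note that minikube signs it with the CA under .minikube/certs, whereas this simplified version self-signs purely to keep the example short:

// Simplified, self-signed sketch of the server certificate generated above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-513442"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-513442"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
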
	I1124 13:47:39.411102  607669 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:47:39.411303  607669 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:47:39.411317  607669 machine.go:97] duration metric: took 3.752022568s to provisionDockerMachine
	I1124 13:47:39.411325  607669 client.go:176] duration metric: took 8.852661553s to LocalClient.Create
	I1124 13:47:39.411358  607669 start.go:167] duration metric: took 8.852720089s to libmachine.API.Create "old-k8s-version-513442"
	I1124 13:47:39.411372  607669 start.go:293] postStartSetup for "old-k8s-version-513442" (driver="docker")
	I1124 13:47:39.411388  607669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:47:39.411452  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:47:39.411508  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.429085  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.536320  607669 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:47:39.540367  607669 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:47:39.540402  607669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:47:39.540414  607669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:47:39.540470  607669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:47:39.540543  607669 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:47:39.540631  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:47:39.549275  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:39.573695  607669 start.go:296] duration metric: took 162.301306ms for postStartSetup
	I1124 13:47:39.574191  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.593438  607669 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/config.json ...
	I1124 13:47:39.593801  607669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:47:39.593897  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.615008  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.717288  607669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:47:39.722340  607669 start.go:128] duration metric: took 9.166080327s to createHost
	I1124 13:47:39.722370  607669 start.go:83] releasing machines lock for "old-k8s-version-513442", held for 9.166275546s
	I1124 13:47:39.722447  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.743680  607669 ssh_runner.go:195] Run: cat /version.json
	I1124 13:47:39.743731  607669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:47:39.743745  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.743812  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.763336  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.763737  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.929805  607669 ssh_runner.go:195] Run: systemctl --version
	I1124 13:47:39.938447  607669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:47:39.944068  607669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:47:39.944147  607669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:47:39.974609  607669 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
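
The find/mv step above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube configures stays active. A local Go sketch of the same idea (illustrative only; the real step runs the `find ... -exec mv` command over SSH) could be:

// Sketch: rename bridge/podman CNI configs in /etc/cni/net.d to *.mk_disabled.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
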
	I1124 13:47:39.974641  607669 start.go:496] detecting cgroup driver to use...
	I1124 13:47:39.974679  607669 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:47:39.974728  607669 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:47:39.990824  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:47:40.004856  607669 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:47:40.004920  607669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:47:40.024248  607669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:47:40.044433  607669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:47:40.145638  607669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:47:40.247759  607669 docker.go:234] disabling docker service ...
	I1124 13:47:40.247829  607669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:47:40.269922  607669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:47:40.284840  607669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:47:40.379978  607669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:47:40.471616  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:47:40.485207  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:47:40.501980  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 13:47:40.513545  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:47:40.524134  607669 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:47:40.524215  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:47:40.533927  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:40.543474  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:47:40.553177  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:40.563129  607669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:47:40.572813  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:47:40.583799  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:47:40.593872  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:47:40.604166  607669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:47:40.612262  607669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:47:40.620472  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:40.706065  607669 ssh_runner.go:195] Run: sudo systemctl restart containerd
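
The sed pipeline above rewrites /etc/containerd/config.toml (sandbox_image, SystemdCgroup, conf_dir, enable_unprivileged_ports) before containerd is restarted. As a small Go sketch, the SystemdCgroup substitution from `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'` can be expressed with the regexp package; the sample input below is invented for illustration and file I/O is omitted:

// Sketch of the SystemdCgroup rewrite applied to containerd's config.toml.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
`
	// (?m) makes ^ and $ match per line, like sed's line-by-line addressing.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = true"))
}
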
	I1124 13:47:40.809269  607669 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:47:40.809335  607669 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:47:40.814110  607669 start.go:564] Will wait 60s for crictl version
	I1124 13:47:40.814187  607669 ssh_runner.go:195] Run: which crictl
	I1124 13:47:40.818745  607669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:47:40.843808  607669 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:47:40.843877  607669 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:40.865477  607669 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:40.893673  607669 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
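
The "Will wait 60s for socket path /run/containerd/containerd.sock" and "Will wait 60s for crictl version" steps above poll until the restarted containerd is reachable. A minimal sketch of that kind of wait loop (minikube's actual retry logic may differ) could be:

// Sketch: poll for the containerd socket until it exists or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("containerd socket is ready")
}
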
	I1124 13:47:36.234464  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:36.234492  572647 cri.go:89] found id: ""
	I1124 13:47:36.234504  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:36.234584  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.240249  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:36.240335  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:36.279967  572647 cri.go:89] found id: ""
	I1124 13:47:36.279998  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.280009  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:36.280027  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:36.280082  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:36.313257  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:36.313286  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:36.313292  572647 cri.go:89] found id: ""
	I1124 13:47:36.313302  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:36.313364  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.317818  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.322103  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:36.322170  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:36.352450  572647 cri.go:89] found id: ""
	I1124 13:47:36.352485  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.352497  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:36.352506  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:36.352569  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:36.381849  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:36.381876  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:36.381881  572647 cri.go:89] found id: ""
	I1124 13:47:36.381896  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:36.381995  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.386540  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.391244  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:36.391326  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:36.425813  572647 cri.go:89] found id: ""
	I1124 13:47:36.425845  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.425856  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:36.425864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:36.425945  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:36.461097  572647 cri.go:89] found id: ""
	I1124 13:47:36.461127  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.461139  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:36.461153  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:36.461172  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:36.499983  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:36.500029  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:36.521192  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:36.521223  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:36.557807  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:36.557859  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:36.611092  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:36.611122  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:36.647506  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:36.647538  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:36.773107  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:36.773142  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:36.847612  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:36.847637  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:36.847662  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.887116  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:36.887154  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:36.924700  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:36.924746  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:36.974655  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:36.974689  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:37.017086  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:37.017118  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.548013  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:39.548547  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
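
The healthz probe above fails with "connection refused" while the apiserver is still coming up. A stripped-down Go sketch of such a probe follows; the real check authenticates against the cluster's CA, and skipping TLS verification here is purely to keep the example short:

// Sketch of an apiserver /healthz probe with a short timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		// While the apiserver is down this fails with "connect: connection
		// refused", exactly as in the log line above.
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // expect "200 OK" with body "ok" once healthy
}
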
	I1124 13:47:39.548616  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:39.548676  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:39.577831  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:39.577852  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:39.577857  572647 cri.go:89] found id: ""
	I1124 13:47:39.577867  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:39.577947  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.582354  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.586625  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:39.586710  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:39.614522  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:39.614543  572647 cri.go:89] found id: ""
	I1124 13:47:39.614552  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:39.614607  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.619054  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:39.619127  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:39.646326  572647 cri.go:89] found id: ""
	I1124 13:47:39.646352  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.646363  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:39.646370  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:39.646429  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:39.672725  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:39.672745  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:39.672749  572647 cri.go:89] found id: ""
	I1124 13:47:39.672757  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:39.672814  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.677191  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.681175  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:39.681258  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:39.708431  572647 cri.go:89] found id: ""
	I1124 13:47:39.708455  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.708464  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:39.708470  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:39.708519  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:39.740642  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.740666  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:39.740672  572647 cri.go:89] found id: ""
	I1124 13:47:39.740682  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:39.740749  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.745558  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.749963  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:39.750090  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:39.785165  572647 cri.go:89] found id: ""
	I1124 13:47:39.785200  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.785213  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:39.785223  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:39.785297  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:39.816314  572647 cri.go:89] found id: ""
	I1124 13:47:39.816344  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.816356  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:39.816369  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:39.816386  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:39.855047  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:39.855082  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.884850  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:39.884886  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:39.923160  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:39.923209  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:40.011551  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:40.011587  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:40.028754  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:40.028784  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:40.073406  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:40.073463  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:40.118088  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:40.118130  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:40.186938  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:40.186963  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:40.186979  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:40.225544  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:40.225575  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:40.264167  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:40.264212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:40.310248  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:40.310285  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
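
Each "Gathering logs for ..." entry above pairs a `crictl ps -a --quiet --name=<component>` lookup with a `crictl logs --tail 400 <id>` call per container found. A simplified Go sketch of that loop (run locally rather than over SSH, error handling trimmed) could be:

// Sketch: list container IDs for a component and dump the last 400 log lines of each.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-scheduler")
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	for _, id := range ids {
		out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s ===\n%s\n", id, out)
	}
}
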
	I1124 13:47:40.101111  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-608395
	
	I1124 13:47:40.101196  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.122644  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:40.122921  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:40.122949  608917 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-608395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-608395/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-608395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:47:40.280196  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:47:40.280226  608917 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:47:40.280268  608917 ubuntu.go:190] setting up certificates
	I1124 13:47:40.280293  608917 provision.go:84] configureAuth start
	I1124 13:47:40.280380  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.303469  608917 provision.go:143] copyHostCerts
	I1124 13:47:40.303532  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:47:40.303543  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:47:40.303590  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:47:40.303726  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:47:40.303739  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:47:40.303772  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:47:40.303856  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:47:40.303868  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:47:40.303892  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:47:40.303983  608917 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.no-preload-608395 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-608395]
	I1124 13:47:40.375070  608917 provision.go:177] copyRemoteCerts
	I1124 13:47:40.375131  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:47:40.375180  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.394610  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.501959  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:47:40.523137  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 13:47:40.542279  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:47:40.562226  608917 provision.go:87] duration metric: took 281.905194ms to configureAuth
	I1124 13:47:40.562265  608917 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:47:40.562572  608917 config.go:182] Loaded profile config "no-preload-608395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:40.562595  608917 machine.go:97] duration metric: took 3.832793094s to provisionDockerMachine
	I1124 13:47:40.562604  608917 client.go:176] duration metric: took 5.273718281s to LocalClient.Create
	I1124 13:47:40.562649  608917 start.go:167] duration metric: took 5.273809151s to libmachine.API.Create "no-preload-608395"
	I1124 13:47:40.562659  608917 start.go:293] postStartSetup for "no-preload-608395" (driver="docker")
	I1124 13:47:40.562671  608917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:47:40.562721  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:47:40.562769  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.582715  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.688873  608917 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:47:40.692683  608917 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:47:40.692717  608917 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:47:40.692818  608917 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:47:40.692947  608917 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:47:40.693078  608917 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:47:40.693208  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:47:40.702139  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:40.725883  608917 start.go:296] duration metric: took 163.205649ms for postStartSetup
	I1124 13:47:40.726376  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.744526  608917 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json ...
	I1124 13:47:40.745022  608917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:47:40.745098  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.763260  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.869180  608917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:47:40.874423  608917 start.go:128] duration metric: took 5.58807074s to createHost
	I1124 13:47:40.874458  608917 start.go:83] releasing machines lock for "no-preload-608395", held for 5.58825096s
	I1124 13:47:40.874540  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.896709  608917 ssh_runner.go:195] Run: cat /version.json
	I1124 13:47:40.896763  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.896807  608917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:47:40.896904  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.918859  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.920576  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:41.084454  608917 ssh_runner.go:195] Run: systemctl --version
	I1124 13:47:41.091582  608917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:47:41.097406  608917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:47:41.097478  608917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:47:41.125540  608917 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:47:41.125566  608917 start.go:496] detecting cgroup driver to use...
	I1124 13:47:41.125601  608917 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:47:41.125650  608917 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:47:41.148294  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:47:41.167664  608917 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:47:41.167740  608917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:47:41.189235  608917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:47:41.213594  608917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:47:41.336134  608917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:47:41.426955  608917 docker.go:234] disabling docker service ...
	I1124 13:47:41.427023  608917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:47:41.448189  608917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:47:41.462073  608917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:47:41.548298  608917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:47:41.635202  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:47:41.649149  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:47:41.664451  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 13:47:41.676460  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:47:41.686131  608917 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:47:41.686199  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:47:41.695720  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:41.705503  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:47:41.714879  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:41.724369  608917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:47:41.733131  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:47:41.742525  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:47:41.751826  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:47:41.762473  608917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:47:41.770755  608917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:47:41.779154  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:41.869150  608917 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 13:47:41.957807  608917 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:47:41.957876  608917 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:47:41.965431  608917 start.go:564] Will wait 60s for crictl version
	I1124 13:47:41.965500  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:41.970973  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:47:42.001317  608917 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:47:42.001405  608917 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:42.026320  608917 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:42.052318  608917 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 13:47:40.896022  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:40.918522  607669 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 13:47:40.923315  607669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:40.935781  607669 kubeadm.go:884] updating cluster {Name:old-k8s-version-513442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:47:40.935932  607669 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:47:40.935998  607669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:40.965650  607669 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:47:40.965689  607669 containerd.go:534] Images already preloaded, skipping extraction
	I1124 13:47:40.965773  607669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:40.999412  607669 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:47:40.999441  607669 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:47:40.999451  607669 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1124 13:47:40.999568  607669 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-513442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:47:40.999640  607669 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:47:41.030216  607669 cni.go:84] Creating CNI manager for ""
	I1124 13:47:41.030250  607669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:41.030273  607669 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:47:41.030304  607669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-513442 NodeName:old-k8s-version-513442 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:47:41.030479  607669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-513442"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
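	The document above is the kubeadm configuration minikube renders for this profile (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file); a few lines below it is copied to /var/tmp/minikube/kubeadm.yaml.new and later handed to kubeadm init. For reference only (this is not something minikube does here), such a file can be sanity-checked on the node without touching the cluster by running kubeadm in dry-run mode:

	  # hypothetical manual check; prints what kubeadm would do without applying anything
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run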
	
	I1124 13:47:41.030593  607669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 13:47:41.040496  607669 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:47:41.040574  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:47:41.048965  607669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1124 13:47:41.063246  607669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:47:41.080199  607669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1124 13:47:41.095141  607669 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:47:41.099735  607669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:41.111816  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:41.205774  607669 ssh_runner.go:195] Run: sudo systemctl start kubelet
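	At this point the kubelet unit file and its 10-kubeadm.conf drop-in (which carries the ExecStart override shown earlier) have been written, systemd has been reloaded, and the service has been started; kubeadm later rewrites the kubelet's flags and config during init. If the merged unit needs to be inspected on the node, a small sketch:

	  # show the unit plus the 10-kubeadm.conf drop-in with the ExecStart override
	  systemctl cat kubelet
	  # current service state and recent log lines
	  systemctl status kubelet --no-pager
	  sudo journalctl -u kubelet --no-pager -n 50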
	I1124 13:47:41.229647  607669 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442 for IP: 192.168.94.2
	I1124 13:47:41.229678  607669 certs.go:195] generating shared ca certs ...
	I1124 13:47:41.229702  607669 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.229867  607669 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:47:41.229906  607669 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:47:41.229935  607669 certs.go:257] generating profile certs ...
	I1124 13:47:41.230010  607669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key
	I1124 13:47:41.230025  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt with IP's: []
	I1124 13:47:41.438692  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt ...
	I1124 13:47:41.438735  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: {Name:mkbb44e092f1569b20ffeeea6d19871e0c7ea39c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.438903  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key ...
	I1124 13:47:41.438942  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key: {Name:mkcdbea7ce1dc4681fc91bbc4b78d2c028c94687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.439100  607669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4
	I1124 13:47:41.439127  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 13:47:41.518895  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 ...
	I1124 13:47:41.518941  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4: {Name:mk47b90333d21f736ed33504f6da28b133242551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.519134  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4 ...
	I1124 13:47:41.519153  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4: {Name:mk4592466df77ceb7a68fa27e5f9a0201b1a8063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.519239  607669 certs.go:382] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt
	I1124 13:47:41.519312  607669 certs.go:386] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key
	I1124 13:47:41.519368  607669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key
	I1124 13:47:41.519388  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt with IP's: []
	I1124 13:47:41.757186  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt ...
	I1124 13:47:41.757217  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt: {Name:mkb434108adbee544176aebf04c9ed8a63b76175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.757418  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key ...
	I1124 13:47:41.757442  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key: {Name:mk640e3789cee888121bd6cc947590ae24e90dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.757683  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:47:41.757725  607669 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:47:41.757736  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:47:41.757777  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:47:41.757814  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:47:41.757849  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:47:41.757940  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:41.758610  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:47:41.778634  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:47:41.799349  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:47:41.825279  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:47:41.844900  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 13:47:41.865036  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:47:41.887428  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:47:41.912645  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:47:41.937284  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:47:41.966303  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:47:41.989056  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:47:42.011989  607669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:47:42.027976  607669 ssh_runner.go:195] Run: openssl version
	I1124 13:47:42.036340  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:47:42.046698  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.051406  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.051481  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.089903  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:47:42.100357  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:47:42.110986  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.115955  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.116031  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.153310  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:47:42.163209  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:47:42.173625  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.178229  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.178308  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.216281  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
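	Each certificate installed under /usr/share/ca-certificates above is also linked into /etc/ssl/certs twice: once under its own name and once under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), which is the name OpenSSL actually looks up when validating chains. The hash-and-symlink step, shown standalone for the minikubeCA bundle from this run:

	  # derive the subject hash OpenSSL uses for trust-store lookups, then publish the cert under it
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # hash is b5213941 in this run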
	I1124 13:47:42.228415  607669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:47:42.232854  607669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:47:42.232959  607669 kubeadm.go:401] StartCluster: {Name:old-k8s-version-513442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:42.233058  607669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:47:42.233119  607669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:47:42.262130  607669 cri.go:89] found id: ""
	I1124 13:47:42.262225  607669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:47:42.271622  607669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:47:42.280568  607669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:47:42.280637  607669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:47:42.289222  607669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:47:42.289241  607669 kubeadm.go:158] found existing configuration files:
	
	I1124 13:47:42.289287  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:47:42.297481  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:47:42.297560  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:47:42.306305  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:47:42.315150  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:47:42.315224  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:47:42.324595  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:47:42.333840  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:47:42.333922  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:47:42.344021  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:47:42.355171  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:47:42.355226  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
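	The block above is the stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not contain it is removed (here the files do not exist yet, so every grep exits with status 2 and the rm is a no-op). The same check, written as a single loop instead of four separate runs:

	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done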
	I1124 13:47:42.364345  607669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:47:42.433190  607669 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 13:47:42.433270  607669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:47:42.487608  607669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:47:42.487695  607669 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:47:42.487758  607669 kubeadm.go:319] OS: Linux
	I1124 13:47:42.487823  607669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:47:42.487892  607669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:47:42.487986  607669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:47:42.488057  607669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:47:42.488125  607669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:47:42.488216  607669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:47:42.488285  607669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:47:42.488352  607669 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:47:42.585565  607669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:47:42.585750  607669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:47:42.585896  607669 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 13:47:42.762435  607669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:47:42.054673  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:42.073094  608917 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 13:47:42.078208  608917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:42.089858  608917 kubeadm.go:884] updating cluster {Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:47:42.090126  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:47:42.090181  608917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:42.117576  608917 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1124 13:47:42.117601  608917 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 13:47:42.117671  608917 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:42.117683  608917 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.117696  608917 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.117708  608917 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.117683  608917 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.117737  608917 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.117738  608917 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.117773  608917 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.119957  608917 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.120028  608917 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.120041  608917 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.120103  608917 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.120144  608917 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.120206  608917 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.120361  608917 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:42.120651  608917 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.324599  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1124 13:47:42.324658  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.329752  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1124 13:47:42.329811  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1124 13:47:42.340410  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1124 13:47:42.340483  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.345994  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1124 13:47:42.346082  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.350632  608917 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1124 13:47:42.350771  608917 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.350861  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.354889  608917 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 13:47:42.355021  608917 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.355078  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.365506  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1124 13:47:42.365584  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.370164  608917 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1124 13:47:42.370246  608917 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.370299  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.371573  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.371569  608917 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1124 13:47:42.371633  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.371663  608917 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.371700  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.383984  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1124 13:47:42.384064  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.391339  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1124 13:47:42.391424  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.394058  608917 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1124 13:47:42.394107  608917 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.394139  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.394173  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.394139  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.410796  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.412029  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.415223  608917 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1124 13:47:42.415273  608917 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.415318  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.430558  608917 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1124 13:47:42.430610  608917 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.430661  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.432115  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.432240  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.432710  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.449068  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.451309  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.451333  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.451434  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.471426  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.471426  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.472006  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.507575  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:47:42.507696  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:42.507737  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.507752  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.507776  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:47:42.507812  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:42.512031  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:47:42.512160  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.512183  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.512220  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:47:42.512281  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:42.542249  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1124 13:47:42.542293  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1124 13:47:42.542356  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.542419  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.542436  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1124 13:47:42.542450  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 13:47:42.542460  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1124 13:47:42.542482  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 13:47:42.542522  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1124 13:47:42.542541  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1124 13:47:42.547506  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:47:42.547609  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:42.591222  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:47:42.591265  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:47:42.591339  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:42.591358  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:42.630891  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1124 13:47:42.630960  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1124 13:47:42.635881  608917 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.635984  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.696822  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1124 13:47:42.696868  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1124 13:47:42.696964  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1124 13:47:42.696987  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1124 13:47:42.855586  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 13:47:43.017613  608917 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:43.017692  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:43.363331  608917 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1124 13:47:43.363429  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:44.322473  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.304751727s)
	I1124 13:47:44.322506  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1124 13:47:44.322534  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:44.322535  608917 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 13:47:44.322572  608917 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:44.322581  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:44.322611  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:44.327186  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
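	The 608917 sequence above is minikube's cache fallback when no preload tarball matches the requested Kubernetes version: for each required image it asks containerd's k8s.io namespace whether the exact name/sha pair exists, removes any mismatched reference with crictl rmi, copies the cached tarball from the host into /var/lib/minikube/images, and imports it with ctr. A rough manual equivalent for one image, assuming its tarball is already on the node:

	  # does containerd's k8s.io namespace already have this exact image?
	  sudo ctr -n=k8s.io images ls "name==registry.k8s.io/pause:3.10.1"
	  # import the cached tarball (path used by this run) and confirm the CRI-visible view
	  sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	  sudo crictl images | grep pause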
	I1124 13:47:42.765072  607669 out.go:252]   - Generating certificates and keys ...
	I1124 13:47:42.765189  607669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:47:42.765429  607669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:47:42.918631  607669 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:47:43.145530  607669 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:47:43.262863  607669 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:47:43.516853  607669 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:47:43.680193  607669 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:47:43.680382  607669 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-513442] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:47:43.927450  607669 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:47:43.927668  607669 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-513442] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:47:44.210866  607669 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:47:44.444469  607669 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:47:44.571652  607669 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:47:44.571791  607669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:47:44.658495  607669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:47:44.899827  607669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:47:45.259836  607669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:47:45.407067  607669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:47:45.407645  607669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:47:45.412109  607669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:47:42.868629  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:45.407011  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.084400483s)
	I1124 13:47:45.407048  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1124 13:47:45.407074  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:45.407121  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:45.407011  608917 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.079785919s)
	I1124 13:47:45.407225  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:46.754417  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.347254819s)
	I1124 13:47:46.754464  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 13:47:46.754487  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:46.754539  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:46.754423  608917 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.34716741s)
	I1124 13:47:46.754625  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:46.791381  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 13:47:46.791500  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:48.250258  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.49567347s)
	I1124 13:47:48.250293  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 13:47:48.250322  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:48.250369  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:48.250393  608917 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.458859359s)
	I1124 13:47:48.250436  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 13:47:48.250458  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 13:47:49.525346  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.274952475s)
	I1124 13:47:49.525372  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 13:47:49.525397  608917 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:49.525432  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:45.413783  607669 out.go:252]   - Booting up control plane ...
	I1124 13:47:45.414000  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:47:45.414122  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:47:45.415606  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:47:45.433197  607669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:47:45.434777  607669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:47:45.434850  607669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:47:45.555124  607669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
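	From here kubeadm (process 607669) blocks in [wait-control-plane] while the kubelet brings up the static pods it just wrote to /etc/kubernetes/manifests. If progress needs to be watched from the node during this wait, a small sketch:

	  # the four control-plane manifests kubeadm just generated (etcd, apiserver, controller-manager, scheduler)
	  sudo ls /etc/kubernetes/manifests
	  # containers the kubelet has actually started so far
	  sudo crictl ps -a --name kube-apiserver
	  sudo journalctl -u kubelet -f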
	I1124 13:47:47.870054  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 13:47:47.870131  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:47.870207  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:47.909612  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:47:47.909637  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:47.909644  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:47.909649  572647 cri.go:89] found id: ""
	I1124 13:47:47.909660  572647 logs.go:282] 3 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:47.909721  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.915163  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.920826  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.926251  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:47.926326  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:47.968362  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:47.968399  572647 cri.go:89] found id: ""
	I1124 13:47:47.968412  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:47.968487  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.973840  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:47.973955  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:48.011120  572647 cri.go:89] found id: ""
	I1124 13:47:48.011151  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.011163  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:48.011172  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:48.011242  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:48.049409  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:48.049433  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:48.049439  572647 cri.go:89] found id: ""
	I1124 13:47:48.049449  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:48.049612  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.055041  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.061717  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:48.061795  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:48.098008  572647 cri.go:89] found id: ""
	I1124 13:47:48.098036  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.098048  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:48.098056  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:48.098116  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:48.134832  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:47:48.134858  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:48.134864  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:48.134868  572647 cri.go:89] found id: ""
	I1124 13:47:48.134879  572647 logs.go:282] 3 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:48.134960  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.140512  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.146067  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.151167  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:48.151293  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:48.194241  572647 cri.go:89] found id: ""
	I1124 13:47:48.194275  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.194287  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:48.194297  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:48.194366  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:48.235586  572647 cri.go:89] found id: ""
	I1124 13:47:48.235617  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.235629  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:48.235644  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:48.235660  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:48.322131  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:47:48.322175  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:47:48.358925  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:48.358964  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:48.399403  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:47:48.399439  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:47:48.442576  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:48.442621  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:48.490297  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:48.490336  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:48.543239  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:48.543277  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:48.591561  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:48.591604  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:48.639975  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:48.640012  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:48.703335  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:48.703393  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:48.760778  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:48.760820  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:48.887283  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:48.887328  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:48.915138  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:48.915177  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1124 13:47:50.557442  607669 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002632 seconds
	I1124 13:47:50.557627  607669 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:47:50.572390  607669 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:47:51.098533  607669 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:47:51.098764  607669 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-513442 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:47:51.610053  607669 kubeadm.go:319] [bootstrap-token] Using token: eki30b.4i7191y9601t9kqb
	I1124 13:47:51.611988  607669 out.go:252]   - Configuring RBAC rules ...
	I1124 13:47:51.612142  607669 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:47:51.618056  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:47:51.627751  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:47:51.631902  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:47:51.635666  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:47:51.643042  607669 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:47:51.655046  607669 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:47:51.879254  607669 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:47:52.022857  607669 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:47:52.024273  607669 kubeadm.go:319] 
	I1124 13:47:52.024439  607669 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:47:52.024451  607669 kubeadm.go:319] 
	I1124 13:47:52.024565  607669 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:47:52.024593  607669 kubeadm.go:319] 
	I1124 13:47:52.024628  607669 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:47:52.024712  607669 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:47:52.024786  607669 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:47:52.024795  607669 kubeadm.go:319] 
	I1124 13:47:52.024870  607669 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:47:52.024880  607669 kubeadm.go:319] 
	I1124 13:47:52.024984  607669 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:47:52.024995  607669 kubeadm.go:319] 
	I1124 13:47:52.025066  607669 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:47:52.025175  607669 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:47:52.025273  607669 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:47:52.025282  607669 kubeadm.go:319] 
	I1124 13:47:52.025399  607669 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:47:52.025508  607669 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:47:52.025517  607669 kubeadm.go:319] 
	I1124 13:47:52.025633  607669 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token eki30b.4i7191y9601t9kqb \
	I1124 13:47:52.025782  607669 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c \
	I1124 13:47:52.025814  607669 kubeadm.go:319] 	--control-plane 
	I1124 13:47:52.025823  607669 kubeadm.go:319] 
	I1124 13:47:52.025955  607669 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:47:52.025964  607669 kubeadm.go:319] 
	I1124 13:47:52.026081  607669 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token eki30b.4i7191y9601t9kqb \
	I1124 13:47:52.026226  607669 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c 
	I1124 13:47:52.029215  607669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:47:52.029395  607669 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:47:52.029436  607669 cni.go:84] Creating CNI manager for ""
	I1124 13:47:52.029450  607669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:52.032075  607669 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:47:52.378094  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.852631537s)
	I1124 13:47:52.378131  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 13:47:52.378164  608917 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:52.378216  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:52.826755  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 13:47:52.826808  608917 cache_images.go:125] Successfully loaded all cached images
	I1124 13:47:52.826816  608917 cache_images.go:94] duration metric: took 10.70919772s to LoadCachedImages
	I1124 13:47:52.826831  608917 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1124 13:47:52.826984  608917 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-608395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:47:52.827057  608917 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:47:52.858503  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:47:52.858531  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:52.858557  608917 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:47:52.858588  608917 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-608395 NodeName:no-preload-608395 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:47:52.858757  608917 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-608395"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:47:52.858835  608917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:47:52.869416  608917 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 13:47:52.869483  608917 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 13:47:52.881260  608917 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 13:47:52.881274  608917 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 13:47:52.881284  608917 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 13:47:52.881370  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 13:47:52.886648  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 13:47:52.886683  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 13:47:53.829310  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:47:53.844364  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 13:47:53.848663  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 13:47:53.848703  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 13:47:54.078871  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 13:47:54.083904  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 13:47:54.083971  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1124 13:47:54.263727  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:47:54.272819  608917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 13:47:54.287533  608917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:47:54.307319  608917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1124 13:47:54.321728  608917 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:47:54.326108  608917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:54.337568  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:54.423252  608917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:47:54.446892  608917 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395 for IP: 192.168.103.2
	I1124 13:47:54.446932  608917 certs.go:195] generating shared ca certs ...
	I1124 13:47:54.446950  608917 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.447115  608917 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:47:54.447173  608917 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:47:54.447189  608917 certs.go:257] generating profile certs ...
	I1124 13:47:54.447250  608917 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key
	I1124 13:47:54.447265  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt with IP's: []
	I1124 13:47:54.480111  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt ...
	I1124 13:47:54.480143  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: {Name:mk0373d89f453529126dca865f8c4273a9b76c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.480318  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key ...
	I1124 13:47:54.480326  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key: {Name:mkd4fd6c97a850045d4415dcd6682504ca05b6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.480412  608917 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0
	I1124 13:47:54.480432  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 13:47:54.564575  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 ...
	I1124 13:47:54.564606  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0: {Name:mk39921501aaa8b9dfdaa0c59584189fbc232834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.564812  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0 ...
	I1124 13:47:54.564832  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0: {Name:mk1e5ec23cae444088ab39a7c9f4bd7f0b68695e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.565002  608917 certs.go:382] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt
	I1124 13:47:54.565092  608917 certs.go:386] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key
	I1124 13:47:54.565147  608917 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key
	I1124 13:47:54.565166  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt with IP's: []
	I1124 13:47:54.682010  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt ...
	I1124 13:47:54.682042  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt: {Name:mk61707e6277a856c1f1cee667479489cd8cfc56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.682251  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key ...
	I1124 13:47:54.682270  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key: {Name:mkdc07f88aff1f58330c9757ac629acf2062c9ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.682520  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:47:54.682564  608917 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:47:54.682574  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:47:54.682602  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:47:54.682626  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:47:54.682651  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:47:54.682697  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:54.683371  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:47:54.703387  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:47:54.722770  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:47:54.743107  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:47:54.763697  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 13:47:54.783164  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:47:54.802752  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:47:54.822653  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:47:54.843126  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:47:54.867619  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:47:54.887814  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:47:54.907876  608917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:47:54.922379  608917 ssh_runner.go:195] Run: openssl version
	I1124 13:47:54.929636  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:47:54.940237  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.944856  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.944961  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.983788  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
	I1124 13:47:54.994031  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:47:55.004849  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.010168  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.010231  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.048930  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:47:55.058618  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:47:55.068496  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:52.033462  607669 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:47:52.040052  607669 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 13:47:52.040080  607669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:47:52.058896  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:47:52.863538  607669 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:47:52.863612  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:52.863691  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-513442 minikube.k8s.io/updated_at=2025_11_24T13_47_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=old-k8s-version-513442 minikube.k8s.io/primary=true
	I1124 13:47:52.876635  607669 ops.go:34] apiserver oom_adj: -16
	I1124 13:47:52.948231  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:53.449196  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:53.948546  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:54.448277  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:54.949098  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:55.073505  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:55.073568  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:55.110353  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:47:55.120226  608917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:47:55.124508  608917 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:47:55.124574  608917 kubeadm.go:401] StartCluster: {Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:55.124676  608917 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:47:55.124734  608917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:47:55.153610  608917 cri.go:89] found id: ""
	I1124 13:47:55.153686  608917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:47:55.163237  608917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:47:55.172281  608917 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:47:55.172352  608917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:47:55.181432  608917 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:47:55.181458  608917 kubeadm.go:158] found existing configuration files:
	
	I1124 13:47:55.181515  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:47:55.190814  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:47:55.190897  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:47:55.200577  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:47:55.210272  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:47:55.210344  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:47:55.219990  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:47:55.228828  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:47:55.228885  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:47:55.238104  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:47:55.246631  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:47:55.246745  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:47:55.255509  608917 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:47:55.316154  608917 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:47:55.376542  608917 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:47:55.448626  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:55.949156  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:56.449055  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:56.949140  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:57.448946  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:57.948732  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:58.448437  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:58.948803  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.449172  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.948946  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.001079  572647 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.085873793s)
	W1124 13:47:59.001127  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 13:47:59.001145  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:59.001163  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:00.448856  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:00.948957  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:01.448664  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:01.948985  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:02.448486  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:02.948890  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:03.448380  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:03.948515  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:04.448564  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:04.527535  607669 kubeadm.go:1114] duration metric: took 11.66399569s to wait for elevateKubeSystemPrivileges
	I1124 13:48:04.527576  607669 kubeadm.go:403] duration metric: took 22.29462596s to StartCluster
	I1124 13:48:04.527612  607669 settings.go:142] acquiring lock: {Name:mka599a3c9bae62ffb84d261186583052ce40f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:04.527702  607669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:48:04.529054  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:04.529299  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:48:04.529306  607669 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:48:04.529383  607669 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:48:04.529498  607669 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-513442"
	I1124 13:48:04.529517  607669 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:48:04.529519  607669 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-513442"
	I1124 13:48:04.529535  607669 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-513442"
	I1124 13:48:04.529561  607669 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-513442"
	I1124 13:48:04.529641  607669 host.go:66] Checking if "old-k8s-version-513442" exists ...
	I1124 13:48:04.529946  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.530180  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.531152  607669 out.go:179] * Verifying Kubernetes components...
	I1124 13:48:04.532717  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:48:04.557008  607669 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:48:04.558405  607669 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:04.558429  607669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:48:04.558495  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:48:04.562314  607669 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-513442"
	I1124 13:48:04.562381  607669 host.go:66] Checking if "old-k8s-version-513442" exists ...
	I1124 13:48:04.563175  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.584062  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:48:04.598587  607669 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:04.598613  607669 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:48:04.598683  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:48:04.628606  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:48:04.653771  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:48:04.701037  607669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:48:04.714197  607669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:04.765729  607669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:04.912320  607669 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 13:48:04.913621  607669 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-513442" to be "Ready" ...
	I1124 13:48:05.136398  607669 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:48:05.160590  608917 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:48:05.160664  608917 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:48:05.160771  608917 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:48:05.160854  608917 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:48:05.160886  608917 kubeadm.go:319] OS: Linux
	I1124 13:48:05.160993  608917 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:48:05.161038  608917 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:48:05.161128  608917 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:48:05.161215  608917 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:48:05.161290  608917 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:48:05.161348  608917 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:48:05.161407  608917 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:48:05.161478  608917 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:48:05.161607  608917 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:48:05.161758  608917 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:48:05.161894  608917 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:48:05.162009  608917 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:48:05.163691  608917 out.go:252]   - Generating certificates and keys ...
	I1124 13:48:05.163805  608917 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:48:05.163947  608917 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:48:05.164054  608917 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:48:05.164154  608917 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:48:05.164250  608917 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:48:05.164325  608917 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:48:05.164403  608917 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:48:05.164579  608917 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-608395] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:48:05.164662  608917 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:48:05.164844  608917 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-608395] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:48:05.164993  608917 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:48:05.165088  608917 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:48:05.165130  608917 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:48:05.165182  608917 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:48:05.165250  608917 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:48:05.165313  608917 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:48:05.165382  608917 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:48:05.165456  608917 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:48:05.165506  608917 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:48:05.165580  608917 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:48:05.165637  608917 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:48:05.167858  608917 out.go:252]   - Booting up control plane ...
	I1124 13:48:05.167962  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:48:05.168043  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:48:05.168104  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:48:05.168199  608917 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:48:05.168298  608917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:48:05.168436  608917 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:48:05.168514  608917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:48:05.168558  608917 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:48:05.168715  608917 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:48:05.168854  608917 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:48:05.168953  608917 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001985013s
	I1124 13:48:05.169093  608917 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:48:05.169202  608917 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1124 13:48:05.169339  608917 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:48:05.169461  608917 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:48:05.169582  608917 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.171045551s
	I1124 13:48:05.169691  608917 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.746683308s
	I1124 13:48:05.169782  608917 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002983514s
	I1124 13:48:05.169958  608917 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:48:05.170079  608917 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:48:05.170136  608917 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:48:05.170449  608917 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-608395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:48:05.170534  608917 kubeadm.go:319] [bootstrap-token] Using token: 0m3tk6.bp5t9g266aj6zg5e
	I1124 13:48:05.172344  608917 out.go:252]   - Configuring RBAC rules ...
	I1124 13:48:05.172497  608917 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:48:05.172606  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:48:05.172790  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:48:05.172947  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:48:05.173067  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:48:05.173152  608917 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:48:05.173251  608917 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:48:05.173290  608917 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:48:05.173330  608917 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:48:05.173336  608917 kubeadm.go:319] 
	I1124 13:48:05.173391  608917 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:48:05.173397  608917 kubeadm.go:319] 
	I1124 13:48:05.173470  608917 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:48:05.173476  608917 kubeadm.go:319] 
	I1124 13:48:05.173498  608917 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:48:05.173553  608917 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:48:05.173610  608917 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:48:05.173623  608917 kubeadm.go:319] 
	I1124 13:48:05.173669  608917 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:48:05.173675  608917 kubeadm.go:319] 
	I1124 13:48:05.173718  608917 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:48:05.173727  608917 kubeadm.go:319] 
	I1124 13:48:05.173778  608917 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:48:05.173858  608917 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:48:05.173981  608917 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:48:05.173990  608917 kubeadm.go:319] 
	I1124 13:48:05.174085  608917 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:48:05.174165  608917 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:48:05.174170  608917 kubeadm.go:319] 
	I1124 13:48:05.174250  608917 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0m3tk6.bp5t9g266aj6zg5e \
	I1124 13:48:05.174352  608917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c \
	I1124 13:48:05.174376  608917 kubeadm.go:319] 	--control-plane 
	I1124 13:48:05.174381  608917 kubeadm.go:319] 
	I1124 13:48:05.174459  608917 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:48:05.174465  608917 kubeadm.go:319] 
	I1124 13:48:05.174560  608917 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0m3tk6.bp5t9g266aj6zg5e \
	I1124 13:48:05.174802  608917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c 
	I1124 13:48:05.174826  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:48:05.174836  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:48:05.177484  608917 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:48:05.137677  607669 addons.go:530] duration metric: took 608.290782ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:48:01.553682  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:02.346718  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:51122->192.168.76.2:8443: read: connection reset by peer
	I1124 13:48:02.346797  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:02.346868  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:02.379430  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:02.379461  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:48:02.379468  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:02.379472  572647 cri.go:89] found id: ""
	I1124 13:48:02.379481  572647 logs.go:282] 3 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:02.379554  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.384666  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.389028  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.393413  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:02.393493  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:02.423298  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:02.423317  572647 cri.go:89] found id: ""
	I1124 13:48:02.423325  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:02.423377  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.428323  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:02.428396  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:02.458971  572647 cri.go:89] found id: ""
	I1124 13:48:02.459002  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.459014  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:02.459023  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:02.459136  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:02.495221  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:02.495253  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:02.495258  572647 cri.go:89] found id: ""
	I1124 13:48:02.495267  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:02.495325  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.504536  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.513709  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:02.513782  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:02.545556  572647 cri.go:89] found id: ""
	I1124 13:48:02.545589  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.545603  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:02.545613  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:02.545686  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:02.575683  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:02.575710  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:48:02.575714  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:02.575717  572647 cri.go:89] found id: ""
	I1124 13:48:02.575725  572647 logs.go:282] 3 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:02.575799  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.580340  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.584784  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.588717  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:02.588774  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:02.617522  572647 cri.go:89] found id: ""
	I1124 13:48:02.617550  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.617558  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:02.617567  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:02.617616  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:02.647375  572647 cri.go:89] found id: ""
	I1124 13:48:02.647407  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.647418  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:02.647432  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:02.647445  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:02.685850  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:02.685900  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:02.794118  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:02.794164  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:02.866960  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:02.866982  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:02.866997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:02.908627  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:48:02.908671  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:48:02.949348  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:02.949380  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:02.997498  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:02.997541  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:03.065816  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:48:03.065856  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:48:03.101360  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:03.101393  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:03.140140  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:03.140183  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:03.160020  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:03.160058  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:03.202092  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:03.202136  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:03.247020  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:03.247060  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:03.283475  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:03.283518  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:05.832996  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:05.833478  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:05.833543  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:05.833607  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:05.862229  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:05.862254  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:05.862258  572647 cri.go:89] found id: ""
	I1124 13:48:05.862267  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:05.862320  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.867091  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.871378  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:05.871455  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:05.900338  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:05.900361  572647 cri.go:89] found id: ""
	I1124 13:48:05.900370  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:05.900428  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.904531  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:05.904606  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:05.933536  572647 cri.go:89] found id: ""
	I1124 13:48:05.933565  572647 logs.go:282] 0 containers: []
	W1124 13:48:05.933579  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:05.933587  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:05.933645  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:05.961942  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:05.961966  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:05.961980  572647 cri.go:89] found id: ""
	I1124 13:48:05.961988  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:05.962048  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.966413  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.970560  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:05.970640  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:05.999021  572647 cri.go:89] found id: ""
	I1124 13:48:05.999046  572647 logs.go:282] 0 containers: []
	W1124 13:48:05.999057  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:05.999065  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:05.999125  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:06.030192  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:06.030216  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:06.030222  572647 cri.go:89] found id: ""
	I1124 13:48:06.030233  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:06.030291  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:06.034509  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:06.038518  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:06.038602  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:06.067432  572647 cri.go:89] found id: ""
	I1124 13:48:06.067459  572647 logs.go:282] 0 containers: []
	W1124 13:48:06.067469  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:06.067477  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:06.067557  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:06.098683  572647 cri.go:89] found id: ""
	I1124 13:48:06.098712  572647 logs.go:282] 0 containers: []
	W1124 13:48:06.098723  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:06.098736  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:06.098753  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:06.163737  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:06.163765  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:06.163783  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:05.179143  608917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:48:05.184780  608917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:48:05.184802  608917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:48:05.199547  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:48:05.451312  608917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:48:05.451481  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:05.451599  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-608395 minikube.k8s.io/updated_at=2025_11_24T13_48_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=no-preload-608395 minikube.k8s.io/primary=true
	I1124 13:48:05.479434  608917 ops.go:34] apiserver oom_adj: -16
	I1124 13:48:05.560179  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:06.061204  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:06.560802  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:07.061219  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:07.561139  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:08.061015  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:08.561034  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.061268  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.560397  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.636185  608917 kubeadm.go:1114] duration metric: took 4.184744627s to wait for elevateKubeSystemPrivileges
	I1124 13:48:09.636235  608917 kubeadm.go:403] duration metric: took 14.511667218s to StartCluster
	I1124 13:48:09.636257  608917 settings.go:142] acquiring lock: {Name:mka599a3c9bae62ffb84d261186583052ce40f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:09.636332  608917 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:48:09.637980  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:09.638233  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:48:09.638262  608917 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:48:09.638340  608917 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:48:09.638439  608917 addons.go:70] Setting storage-provisioner=true in profile "no-preload-608395"
	I1124 13:48:09.638460  608917 addons.go:239] Setting addon storage-provisioner=true in "no-preload-608395"
	I1124 13:48:09.638459  608917 addons.go:70] Setting default-storageclass=true in profile "no-preload-608395"
	I1124 13:48:09.638486  608917 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-608395"
	I1124 13:48:09.638512  608917 host.go:66] Checking if "no-preload-608395" exists ...
	I1124 13:48:09.638608  608917 config.go:182] Loaded profile config "no-preload-608395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:48:09.638889  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.639090  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.640719  608917 out.go:179] * Verifying Kubernetes components...
	I1124 13:48:09.642235  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:48:09.665980  608917 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:48:09.668239  608917 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:09.668262  608917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:48:09.668334  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:48:09.668545  608917 addons.go:239] Setting addon default-storageclass=true in "no-preload-608395"
	I1124 13:48:09.668594  608917 host.go:66] Checking if "no-preload-608395" exists ...
	I1124 13:48:09.669115  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.708052  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:48:09.711213  608917 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:09.711236  608917 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:48:09.711297  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:48:09.737250  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:48:09.745340  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:48:09.808489  608917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:48:09.832661  608917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:09.863280  608917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:09.941101  608917 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 13:48:09.942521  608917 node_ready.go:35] waiting up to 6m0s for node "no-preload-608395" to be "Ready" ...
	I1124 13:48:10.163475  608917 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:48:05.418106  607669 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-513442" context rescaled to 1 replicas
	W1124 13:48:06.917478  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:09.417409  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:06.199640  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:06.199675  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:06.235793  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:06.235827  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:06.290172  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:06.290212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:06.325935  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:06.325975  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:06.359485  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:06.359523  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:06.406787  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:06.406834  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:06.503206  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:06.503251  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:06.520877  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:06.520924  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:06.561472  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:06.561510  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:06.591722  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:06.591748  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:09.128043  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:09.128549  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:09.128609  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:09.128678  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:09.158194  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:09.158216  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:09.158220  572647 cri.go:89] found id: ""
	I1124 13:48:09.158229  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:09.158308  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.162575  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.167402  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:09.167472  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:09.196608  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:09.196633  572647 cri.go:89] found id: ""
	I1124 13:48:09.196645  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:09.196709  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.201107  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:09.201190  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:09.232265  572647 cri.go:89] found id: ""
	I1124 13:48:09.232300  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.232311  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:09.232320  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:09.232386  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:09.272990  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:09.273017  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:09.273022  572647 cri.go:89] found id: ""
	I1124 13:48:09.273033  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:09.273100  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.278614  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.283409  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:09.283485  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:09.314562  572647 cri.go:89] found id: ""
	I1124 13:48:09.314592  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.314604  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:09.314611  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:09.314682  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:09.346903  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:09.346963  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:09.346970  572647 cri.go:89] found id: ""
	I1124 13:48:09.346979  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:09.347049  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.351444  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.355601  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:09.355675  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:09.387667  572647 cri.go:89] found id: ""
	I1124 13:48:09.387697  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.387709  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:09.387716  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:09.387779  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:09.417828  572647 cri.go:89] found id: ""
	I1124 13:48:09.417854  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.417863  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:09.417876  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:09.417894  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:09.518663  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:09.518707  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:09.538049  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:09.538093  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:09.606209  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:09.606232  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:09.606246  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:09.646703  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:09.646736  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:09.708037  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:09.708078  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:09.779698  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:09.779735  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:09.819613  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:09.819663  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:09.867349  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:09.867388  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:09.917580  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:09.917620  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:09.959751  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:09.959793  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:10.006236  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:10.006274  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:10.165110  608917 addons.go:530] duration metric: took 526.764143ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:48:10.444998  608917 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-608395" context rescaled to 1 replicas
	W1124 13:48:11.948043  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:14.445721  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:11.417485  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:13.418201  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:12.563487  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:12.564031  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:12.564091  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:12.564151  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:12.598524  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:12.598553  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:12.598559  572647 cri.go:89] found id: ""
	I1124 13:48:12.598570  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:12.598654  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.603466  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.608383  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:12.608462  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:12.652395  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:12.652422  572647 cri.go:89] found id: ""
	I1124 13:48:12.652433  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:12.652503  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.657966  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:12.658060  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:12.693432  572647 cri.go:89] found id: ""
	I1124 13:48:12.693468  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.693480  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:12.693489  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:12.693558  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:12.731546  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:12.731572  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:12.731579  572647 cri.go:89] found id: ""
	I1124 13:48:12.731590  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:12.731820  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.737055  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.741859  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:12.741953  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:12.776627  572647 cri.go:89] found id: ""
	I1124 13:48:12.776652  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.776660  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:12.776667  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:12.776735  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:12.809077  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:12.809099  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:12.809102  572647 cri.go:89] found id: ""
	I1124 13:48:12.809112  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:12.809166  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.813963  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.818488  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:12.818563  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:12.852844  572647 cri.go:89] found id: ""
	I1124 13:48:12.852879  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.852891  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:12.852900  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:12.853034  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:12.889177  572647 cri.go:89] found id: ""
	I1124 13:48:12.889228  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.889240  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:12.889255  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:12.889278  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:12.941108  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:12.941146  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:13.012950  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:13.012998  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:13.059324  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:13.059367  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:13.096188  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:13.096235  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:13.157287  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:13.157338  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:13.198203  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:13.198250  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:13.219729  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:13.219773  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:13.293315  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:13.293338  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:13.293356  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:13.338975  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:13.339029  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:13.385546  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:13.385596  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:13.427130  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:13.427162  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:16.027717  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:16.028251  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:16.028310  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:16.028363  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:16.058811  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:16.058839  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:16.058847  572647 cri.go:89] found id: ""
	I1124 13:48:16.058858  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:16.058999  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.063797  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.068208  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:16.068282  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:16.097374  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:16.097404  572647 cri.go:89] found id: ""
	I1124 13:48:16.097416  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:16.097484  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.102967  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:16.103045  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:16.133626  572647 cri.go:89] found id: ""
	I1124 13:48:16.133660  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.133670  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:16.133676  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:16.133746  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:16.165392  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:16.165424  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:16.165431  572647 cri.go:89] found id: ""
	I1124 13:48:16.165442  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:16.165507  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.170277  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.174579  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:16.174661  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	W1124 13:48:16.445831  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:18.945868  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:15.917184  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:17.917526  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:19.416721  607669 node_ready.go:49] node "old-k8s-version-513442" is "Ready"
	I1124 13:48:19.416760  607669 node_ready.go:38] duration metric: took 14.503103561s for node "old-k8s-version-513442" to be "Ready" ...
	I1124 13:48:19.416778  607669 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:48:19.416833  607669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:48:19.430267  607669 api_server.go:72] duration metric: took 14.90093273s to wait for apiserver process to appear ...
	I1124 13:48:19.430299  607669 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:48:19.430326  607669 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 13:48:19.436844  607669 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 13:48:19.438582  607669 api_server.go:141] control plane version: v1.28.0
	I1124 13:48:19.438618  607669 api_server.go:131] duration metric: took 8.311152ms to wait for apiserver health ...
	I1124 13:48:19.438632  607669 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:48:19.443134  607669 system_pods.go:59] 8 kube-system pods found
	I1124 13:48:19.443191  607669 system_pods.go:61] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.443200  607669 system_pods.go:61] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.443207  607669 system_pods.go:61] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.443213  607669 system_pods.go:61] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.443219  607669 system_pods.go:61] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.443225  607669 system_pods.go:61] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.443231  607669 system_pods.go:61] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.443240  607669 system_pods.go:61] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.443248  607669 system_pods.go:74] duration metric: took 4.608559ms to wait for pod list to return data ...
	I1124 13:48:19.443260  607669 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:48:19.446125  607669 default_sa.go:45] found service account: "default"
	I1124 13:48:19.446157  607669 default_sa.go:55] duration metric: took 2.890045ms for default service account to be created ...
	I1124 13:48:19.446170  607669 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:48:19.450324  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:19.450375  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.450385  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.450394  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.450408  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.450415  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.450425  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.450434  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.450449  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.450484  607669 retry.go:31] will retry after 306.547577ms: missing components: kube-dns
	I1124 13:48:19.761785  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:19.761821  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.761828  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.761835  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.761839  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.761843  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.761846  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.761850  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.761855  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.761871  607669 retry.go:31] will retry after 263.639636ms: missing components: kube-dns
	I1124 13:48:20.030723  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:20.030764  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:20.030773  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:20.030781  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:20.030787  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:20.030794  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:20.030799  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:20.030804  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:20.030812  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:20.030836  607669 retry.go:31] will retry after 485.23875ms: missing components: kube-dns
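The retries above show minikube waiting for the kube-dns component (coredns) to leave Pending before it declares k8s-apps running. A minimal way to watch the same condition by hand, assuming the kubectl context name recorded in this run, is:

	# list the coredns pods the retry loop is waiting on (context name taken from this run)
	kubectl --context old-k8s-version-513442 -n kube-system get pods -l k8s-app=kube-dns
	# block until they report Ready, or time out
	kubectl --context old-k8s-version-513442 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s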
	I1124 13:48:16.203971  572647 cri.go:89] found id: ""
	I1124 13:48:16.204004  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.204016  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:16.204025  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:16.204087  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:16.233087  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:16.233113  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:16.233119  572647 cri.go:89] found id: ""
	I1124 13:48:16.233130  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:16.233184  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.237937  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.242366  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:16.242450  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:16.273007  572647 cri.go:89] found id: ""
	I1124 13:48:16.273034  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.273043  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:16.273049  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:16.273100  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:16.302483  572647 cri.go:89] found id: ""
	I1124 13:48:16.302518  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.302537  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:16.302553  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:16.302575  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:16.360777  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:16.360817  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:16.391672  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:16.391700  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:16.490704  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:16.490743  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:16.530411  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:16.530448  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:16.567070  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:16.567107  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:16.601689  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:16.601728  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:16.646105  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:16.646143  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:16.682522  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:16.682560  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:16.699850  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:16.699887  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:16.759811  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:16.759835  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:16.759853  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:16.795013  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:16.795048  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.334057  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:19.334568  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:19.334661  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:19.334733  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:19.365714  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:19.365735  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.365739  572647 cri.go:89] found id: ""
	I1124 13:48:19.365747  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:19.365800  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.370354  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.374856  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:19.374992  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:19.405492  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:19.405519  572647 cri.go:89] found id: ""
	I1124 13:48:19.405529  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:19.405589  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.411364  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:19.411426  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:19.443360  572647 cri.go:89] found id: ""
	I1124 13:48:19.443391  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.443404  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:19.443412  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:19.443477  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:19.475298  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:19.475324  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:19.475331  572647 cri.go:89] found id: ""
	I1124 13:48:19.475341  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:19.475407  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.480369  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.484782  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:19.484863  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:19.514622  572647 cri.go:89] found id: ""
	I1124 13:48:19.514666  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.514716  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:19.514726  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:19.514807  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:19.550847  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:19.550872  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:19.550877  572647 cri.go:89] found id: ""
	I1124 13:48:19.550886  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:19.550963  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.556478  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.561320  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:19.561401  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:19.596190  572647 cri.go:89] found id: ""
	I1124 13:48:19.596226  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.596238  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:19.596247  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:19.596309  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:19.627382  572647 cri.go:89] found id: ""
	I1124 13:48:19.627413  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.627424  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:19.627436  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:19.627452  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:19.694796  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:19.694836  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:19.752858  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:19.752896  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:19.788182  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:19.788224  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:19.879216  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:19.879255  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:19.940757  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:19.940776  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:19.940790  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.979681  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:19.979726  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:20.020042  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:20.020085  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:20.064463  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:20.064499  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:20.098012  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:20.098044  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:20.132122  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:20.132157  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:20.148958  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:20.148997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
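Each of the cycles above repeats the same diagnosis while the apiserver stays unreachable: probe the healthz endpoint, list the kube-apiserver containers through crictl, and tail their logs. A sketch of running the same checks by hand on the node, reusing the endpoint and container ID that appear in this run, would be:

	# probe the endpoint the log keeps polling (returns "connection refused" while the apiserver is down)
	curl -k https://192.168.76.2:8443/healthz
	# list all kube-apiserver containers, running or exited
	sudo crictl ps -a --name kube-apiserver
	# tail the most recent apiserver container's logs, as the log-gathering step does
	sudo crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3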
	I1124 13:48:20.521094  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:20.521123  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Running
	I1124 13:48:20.521130  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:20.521133  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:20.521137  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:20.521141  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:20.521145  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:20.521148  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:20.521151  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Running
	I1124 13:48:20.521159  607669 system_pods.go:126] duration metric: took 1.074982882s to wait for k8s-apps to be running ...
	I1124 13:48:20.521166  607669 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:48:20.521215  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:48:20.535666  607669 system_svc.go:56] duration metric: took 14.486184ms WaitForService to wait for kubelet
	I1124 13:48:20.535706  607669 kubeadm.go:587] duration metric: took 16.006375183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:48:20.535732  607669 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:48:20.538619  607669 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:48:20.538646  607669 node_conditions.go:123] node cpu capacity is 8
	I1124 13:48:20.538662  607669 node_conditions.go:105] duration metric: took 2.9245ms to run NodePressure ...
	I1124 13:48:20.538676  607669 start.go:242] waiting for startup goroutines ...
	I1124 13:48:20.538683  607669 start.go:247] waiting for cluster config update ...
	I1124 13:48:20.538693  607669 start.go:256] writing updated cluster config ...
	I1124 13:48:20.539040  607669 ssh_runner.go:195] Run: rm -f paused
	I1124 13:48:20.543325  607669 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:20.547793  607669 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-b5rrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.552447  607669 pod_ready.go:94] pod "coredns-5dd5756b68-b5rrl" is "Ready"
	I1124 13:48:20.552472  607669 pod_ready.go:86] duration metric: took 4.651627ms for pod "coredns-5dd5756b68-b5rrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.556328  607669 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.561689  607669 pod_ready.go:94] pod "etcd-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.561717  607669 pod_ready.go:86] duration metric: took 5.363766ms for pod "etcd-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.564634  607669 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.569265  607669 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.569291  607669 pod_ready.go:86] duration metric: took 4.631558ms for pod "kube-apiserver-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.572304  607669 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.948397  607669 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.948423  607669 pod_ready.go:86] duration metric: took 376.095956ms for pod "kube-controller-manager-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.148648  607669 pod_ready.go:83] waiting for pod "kube-proxy-hzfcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.548255  607669 pod_ready.go:94] pod "kube-proxy-hzfcx" is "Ready"
	I1124 13:48:21.548288  607669 pod_ready.go:86] duration metric: took 399.608636ms for pod "kube-proxy-hzfcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.748744  607669 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:22.147789  607669 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-513442" is "Ready"
	I1124 13:48:22.147821  607669 pod_ready.go:86] duration metric: took 399.0528ms for pod "kube-scheduler-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:22.147833  607669 pod_ready.go:40] duration metric: took 1.604464617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:22.193883  607669 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 13:48:22.196207  607669 out.go:203] 
	W1124 13:48:22.197964  607669 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 13:48:22.199516  607669 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 13:48:22.201541  607669 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-513442" cluster and "default" namespace by default
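The warning above flags a client/server minor-version skew of 6 (kubectl 1.34.2 against Kubernetes 1.28.0); the suggested workaround is to run the version-matched kubectl that minikube bundles, for example:

	# confirm the client version that triggered the skew warning
	kubectl version --client
	# run the bundled kubectl matching the cluster's v1.28.0 control plane, as suggested above
	minikube kubectl -- get pods -A
	# with several profiles active, targeting this cluster explicitly is assumed to need the -p flag
	minikube -p old-k8s-version-513442 kubectl -- get pods -A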
	W1124 13:48:20.947014  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:22.948554  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	I1124 13:48:24.446130  608917 node_ready.go:49] node "no-preload-608395" is "Ready"
	I1124 13:48:24.446168  608917 node_ready.go:38] duration metric: took 14.503611427s for node "no-preload-608395" to be "Ready" ...
	I1124 13:48:24.446195  608917 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:48:24.446254  608917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:48:24.460952  608917 api_server.go:72] duration metric: took 14.82264088s to wait for apiserver process to appear ...
	I1124 13:48:24.460990  608917 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:48:24.461021  608917 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 13:48:24.466768  608917 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 13:48:24.468117  608917 api_server.go:141] control plane version: v1.34.1
	I1124 13:48:24.468151  608917 api_server.go:131] duration metric: took 7.151862ms to wait for apiserver health ...
	I1124 13:48:24.468164  608917 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:48:24.473836  608917 system_pods.go:59] 8 kube-system pods found
	I1124 13:48:24.473891  608917 system_pods.go:61] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.473901  608917 system_pods.go:61] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.473965  608917 system_pods.go:61] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.473980  608917 system_pods.go:61] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.473987  608917 system_pods.go:61] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.473995  608917 system_pods.go:61] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.474001  608917 system_pods.go:61] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.474011  608917 system_pods.go:61] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.474025  608917 system_pods.go:74] duration metric: took 5.853076ms to wait for pod list to return data ...
	I1124 13:48:24.474037  608917 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:48:24.476681  608917 default_sa.go:45] found service account: "default"
	I1124 13:48:24.476712  608917 default_sa.go:55] duration metric: took 2.661232ms for default service account to be created ...
	I1124 13:48:24.476724  608917 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:48:24.479715  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.479757  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.479765  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.479776  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.479782  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.479788  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.479793  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.479798  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.479806  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.479831  608917 retry.go:31] will retry after 257.034103ms: missing components: kube-dns
	I1124 13:48:24.740811  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.740842  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.740848  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.740854  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.740858  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.740863  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.740866  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.740869  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.740876  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.740892  608917 retry.go:31] will retry after 244.335921ms: missing components: kube-dns
	I1124 13:48:24.989021  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.989054  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.989061  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.989067  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.989072  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.989077  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.989080  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.989084  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.989089  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.989104  608917 retry.go:31] will retry after 431.238044ms: missing components: kube-dns
	I1124 13:48:22.686011  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:22.686450  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:22.686506  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:22.686563  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:22.718842  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:22.718868  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:22.718874  572647 cri.go:89] found id: ""
	I1124 13:48:22.718885  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:22.719025  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.724051  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.728627  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:22.728697  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:22.758279  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:22.758305  572647 cri.go:89] found id: ""
	I1124 13:48:22.758315  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:22.758378  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.762905  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:22.763025  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:22.796176  572647 cri.go:89] found id: ""
	I1124 13:48:22.796207  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.796218  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:22.796227  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:22.796293  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:22.828770  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:22.828801  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:22.828815  572647 cri.go:89] found id: ""
	I1124 13:48:22.828827  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:22.828886  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.833530  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.837668  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:22.837750  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:22.867760  572647 cri.go:89] found id: ""
	I1124 13:48:22.867793  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.867806  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:22.867815  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:22.867976  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:22.899275  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:22.899305  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:22.899312  572647 cri.go:89] found id: ""
	I1124 13:48:22.899327  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:22.899391  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.903859  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.908121  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:22.908190  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:22.938883  572647 cri.go:89] found id: ""
	I1124 13:48:22.938961  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.938972  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:22.938980  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:22.939033  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:22.969840  572647 cri.go:89] found id: ""
	I1124 13:48:22.969864  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.969872  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:22.969882  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:22.969903  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:23.031386  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:23.031411  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:23.031425  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:23.067770  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:23.067805  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:23.104851  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:23.104886  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:23.160621  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:23.160668  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:23.190994  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:23.191026  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:23.226509  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:23.226542  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:23.269082  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:23.269130  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:23.360572  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:23.360613  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:23.399049  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:23.399089  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:23.440241  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:23.440282  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:23.474172  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:23.474212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:25.992569  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:25.993167  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:25.993241  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:25.993310  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:26.021789  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:26.021816  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:26.021823  572647 cri.go:89] found id: ""
	I1124 13:48:26.021834  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:26.021985  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.027084  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.031267  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:26.031350  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:26.063349  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:26.063379  572647 cri.go:89] found id: ""
	I1124 13:48:26.063390  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:26.063448  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.068064  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:26.068140  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:26.096106  572647 cri.go:89] found id: ""
	I1124 13:48:26.096148  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.096158  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:26.096165  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:26.096220  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:26.126156  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:26.126186  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:26.126193  572647 cri.go:89] found id: ""
	I1124 13:48:26.126205  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:26.126275  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.131369  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.135595  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:26.135657  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:26.163133  572647 cri.go:89] found id: ""
	I1124 13:48:26.163161  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.163169  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:26.163187  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:26.163244  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:26.192355  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:26.192378  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:26.192384  572647 cri.go:89] found id: ""
	I1124 13:48:26.192394  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:26.192549  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.197316  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:25.424597  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:25.424631  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:25.424636  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:25.424642  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:25.424646  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:25.424650  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:25.424653  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:25.424656  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:25.424663  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:25.424679  608917 retry.go:31] will retry after 458.014987ms: missing components: kube-dns
	I1124 13:48:25.886603  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:25.886633  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Running
	I1124 13:48:25.886641  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:25.886644  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:25.886649  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:25.886653  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:25.886657  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:25.886660  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:25.886663  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Running
	I1124 13:48:25.886671  608917 system_pods.go:126] duration metric: took 1.409940532s to wait for k8s-apps to be running ...
	I1124 13:48:25.886680  608917 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:48:25.886726  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:48:25.901294  608917 system_svc.go:56] duration metric: took 14.604723ms WaitForService to wait for kubelet
	I1124 13:48:25.901324  608917 kubeadm.go:587] duration metric: took 16.26302303s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:48:25.901343  608917 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:48:25.904190  608917 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:48:25.904219  608917 node_conditions.go:123] node cpu capacity is 8
	I1124 13:48:25.904234  608917 node_conditions.go:105] duration metric: took 2.88688ms to run NodePressure ...
	I1124 13:48:25.904249  608917 start.go:242] waiting for startup goroutines ...
	I1124 13:48:25.904256  608917 start.go:247] waiting for cluster config update ...
	I1124 13:48:25.904266  608917 start.go:256] writing updated cluster config ...
	I1124 13:48:25.904560  608917 ssh_runner.go:195] Run: rm -f paused
	I1124 13:48:25.909215  608917 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:25.912986  608917 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rcf8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.917301  608917 pod_ready.go:94] pod "coredns-66bc5c9577-rcf8v" is "Ready"
	I1124 13:48:25.917324  608917 pod_ready.go:86] duration metric: took 4.297309ms for pod "coredns-66bc5c9577-rcf8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.919442  608917 pod_ready.go:83] waiting for pod "etcd-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.923976  608917 pod_ready.go:94] pod "etcd-no-preload-608395" is "Ready"
	I1124 13:48:25.923999  608917 pod_ready.go:86] duration metric: took 4.535115ms for pod "etcd-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.926003  608917 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.930385  608917 pod_ready.go:94] pod "kube-apiserver-no-preload-608395" is "Ready"
	I1124 13:48:25.930413  608917 pod_ready.go:86] duration metric: took 4.382406ms for pod "kube-apiserver-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.932261  608917 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.313581  608917 pod_ready.go:94] pod "kube-controller-manager-no-preload-608395" is "Ready"
	I1124 13:48:26.313615  608917 pod_ready.go:86] duration metric: took 381.333887ms for pod "kube-controller-manager-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.514064  608917 pod_ready.go:83] waiting for pod "kube-proxy-5vj5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.913664  608917 pod_ready.go:94] pod "kube-proxy-5vj5p" is "Ready"
	I1124 13:48:26.913702  608917 pod_ready.go:86] duration metric: took 399.60223ms for pod "kube-proxy-5vj5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.114488  608917 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.514056  608917 pod_ready.go:94] pod "kube-scheduler-no-preload-608395" is "Ready"
	I1124 13:48:27.514084  608917 pod_ready.go:86] duration metric: took 399.56934ms for pod "kube-scheduler-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.514098  608917 pod_ready.go:40] duration metric: took 1.604847792s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:27.561310  608917 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 13:48:27.563544  608917 out.go:179] * Done! kubectl is now configured to use "no-preload-608395" cluster and "default" namespace by default
	I1124 13:48:26.202352  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:26.202439  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:26.231899  572647 cri.go:89] found id: ""
	I1124 13:48:26.231953  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.231964  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:26.231973  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:26.232040  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:26.263417  572647 cri.go:89] found id: ""
	I1124 13:48:26.263446  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.263459  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:26.263473  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:26.263488  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:26.354230  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:26.354265  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:26.389608  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:26.389652  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:26.427040  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:26.427077  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:26.466568  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:26.466603  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:26.503710  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:26.503749  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:26.539150  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:26.539193  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:26.583782  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:26.583825  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:26.617656  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:26.617696  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:26.634777  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:26.634809  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:26.693534  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:26.693559  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:26.693577  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:26.748627  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:26.748668  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.280171  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:29.280640  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:29.280694  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:29.280748  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:29.309613  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:29.309638  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:29.309644  572647 cri.go:89] found id: ""
	I1124 13:48:29.309660  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:29.309730  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.314623  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.319864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:29.319962  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:29.348671  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:29.348699  572647 cri.go:89] found id: ""
	I1124 13:48:29.348709  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:29.348775  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.353662  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:29.353728  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:29.383017  572647 cri.go:89] found id: ""
	I1124 13:48:29.383046  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.383058  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:29.383066  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:29.383121  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:29.411238  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:29.411259  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:29.411264  572647 cri.go:89] found id: ""
	I1124 13:48:29.411271  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:29.411325  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.415976  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.420189  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:29.420264  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:29.449856  572647 cri.go:89] found id: ""
	I1124 13:48:29.449890  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.449921  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:29.449929  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:29.450001  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:29.480136  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.480164  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:29.480171  572647 cri.go:89] found id: ""
	I1124 13:48:29.480181  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:29.480258  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.484998  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.489433  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:29.489504  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:29.519804  572647 cri.go:89] found id: ""
	I1124 13:48:29.519841  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.519854  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:29.519864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:29.520048  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:29.549935  572647 cri.go:89] found id: ""
	I1124 13:48:29.549964  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.549974  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:29.549986  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:29.549997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:29.593521  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:29.593560  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:29.681751  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:29.681792  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:29.699198  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:29.699232  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:29.759823  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:29.759850  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:29.759863  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:29.798497  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:29.798534  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:29.835677  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:29.835718  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.864876  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:29.864923  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:29.898153  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:29.898186  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:29.932035  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:29.932073  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:29.971224  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:29.971258  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:30.026576  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:30.026619  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b44a9a38266a3       56cc512116c8f       10 seconds ago      Running             busybox                   0                   91e7e42c593d0       busybox                                          default
	8d4a4dd9d6632       ead0a4a53df89       16 seconds ago      Running             coredns                   0                   1c930bc4d6523       coredns-5dd5756b68-b5rrl                         kube-system
	c9c8f51adb6bb       6e38f40d628db       16 seconds ago      Running             storage-provisioner       0                   840fae773d68e       storage-provisioner                              kube-system
	1dab1df16e654       409467f978b4a       27 seconds ago      Running             kindnet-cni               0                   30a65fd13bcca       kindnet-tpjvb                                    kube-system
	0b87cfcc163e3       ea1030da44aa1       30 seconds ago      Running             kube-proxy                0                   555af9e11f935       kube-proxy-hzfcx                                 kube-system
	b89c098ff2cb6       bb5e0dde9054c       48 seconds ago      Running             kube-apiserver            0                   b832e9f75c0f1       kube-apiserver-old-k8s-version-513442            kube-system
	f7663d3953f0e       4be79c38a4bab       48 seconds ago      Running             kube-controller-manager   0                   06bb689695cce       kube-controller-manager-old-k8s-version-513442   kube-system
	bdd5c20173350       f6f496300a2ae       48 seconds ago      Running             kube-scheduler            0                   ac1efcdb81d0e       kube-scheduler-old-k8s-version-513442            kube-system
	5793c7fd11b5c       73deb9a3f7025       49 seconds ago      Running             etcd                      0                   3c4129b98c0d7       etcd-old-k8s-version-513442                      kube-system
	
	
	==> containerd <==
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.636050137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-b5rrl,Uid:4e6c9b7c-5f0a-4c60-8197-20e985a07403,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c930bc4d6523dcc2ff99c9243131fcf23dfc7881b09c013bf55e68b23ecf25e\""
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.639799945Z" level=info msg="CreateContainer within sandbox \"1c930bc4d6523dcc2ff99c9243131fcf23dfc7881b09c013bf55e68b23ecf25e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.648881001Z" level=info msg="Container 8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.657829357Z" level=info msg="CreateContainer within sandbox \"1c930bc4d6523dcc2ff99c9243131fcf23dfc7881b09c013bf55e68b23ecf25e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89\""
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.658662420Z" level=info msg="StartContainer for \"8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89\""
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.659800869Z" level=info msg="connecting to shim 8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89" address="unix:///run/containerd/s/c69a9b00491bdefff20b5fba21aa1d556fb9c3a3bad974c8b8be870ca95e072b" protocol=ttrpc version=3
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.704634320Z" level=info msg="StartContainer for \"c9c8f51adb6bbca8e0f954ad9082c0c66235dce129e152dd682ab69622b44aac\" returns successfully"
	Nov 24 13:48:19 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:19.716701551Z" level=info msg="StartContainer for \"8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89\" returns successfully"
	Nov 24 13:48:22 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:22.659740340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e21ee73b-578f-48c9-826d-ab3b4bbb7871,Namespace:default,Attempt:0,}"
	Nov 24 13:48:22 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:22.705643585Z" level=info msg="connecting to shim 91e7e42c593d0f49381ba051fa95a3bffc3c2fedf4ee572f1ee3e65a03cebfff" address="unix:///run/containerd/s/a6973921fa6bbb987fab0736637648be3dc3e077c5046184370bd0c127ef00c4" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 13:48:22 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:22.781316455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e21ee73b-578f-48c9-826d-ab3b4bbb7871,Namespace:default,Attempt:0,} returns sandbox id \"91e7e42c593d0f49381ba051fa95a3bffc3c2fedf4ee572f1ee3e65a03cebfff\""
	Nov 24 13:48:22 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:22.783364521Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.550927147Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.551949670Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396647"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.553332639Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.555518804Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.555999909Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.772594905s"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.556037581Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.557958127Z" level=info msg="CreateContainer within sandbox \"91e7e42c593d0f49381ba051fa95a3bffc3c2fedf4ee572f1ee3e65a03cebfff\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.566156418Z" level=info msg="Container b44a9a38266a36367dda4e29d517101d0bad25018140ed3049b32babe692f605: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.572811164Z" level=info msg="CreateContainer within sandbox \"91e7e42c593d0f49381ba051fa95a3bffc3c2fedf4ee572f1ee3e65a03cebfff\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"b44a9a38266a36367dda4e29d517101d0bad25018140ed3049b32babe692f605\""
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.573543998Z" level=info msg="StartContainer for \"b44a9a38266a36367dda4e29d517101d0bad25018140ed3049b32babe692f605\""
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.574401159Z" level=info msg="connecting to shim b44a9a38266a36367dda4e29d517101d0bad25018140ed3049b32babe692f605" address="unix:///run/containerd/s/a6973921fa6bbb987fab0736637648be3dc3e077c5046184370bd0c127ef00c4" protocol=ttrpc version=3
	Nov 24 13:48:25 old-k8s-version-513442 containerd[663]: time="2025-11-24T13:48:25.628848926Z" level=info msg="StartContainer for \"b44a9a38266a36367dda4e29d517101d0bad25018140ed3049b32babe692f605\" returns successfully"
	Nov 24 13:48:32 old-k8s-version-513442 containerd[663]: E1124 13:48:32.433506     663 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [8d4a4dd9d6632a607a007a0e131e676696c4d059874b38cd47f762f53926ad89] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57003 - 26434 "HINFO IN 1735205229727733014.6660763770011463869. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021751094s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-513442
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-513442
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=old-k8s-version-513442
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_47_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:47:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-513442
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:48:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:48:22 +0000   Mon, 24 Nov 2025 13:47:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:48:22 +0000   Mon, 24 Nov 2025 13:47:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:48:22 +0000   Mon, 24 Nov 2025 13:47:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:48:22 +0000   Mon, 24 Nov 2025 13:48:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-513442
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                7bc159f8-7fe0-4f8d-82dc-0cc733a1645b
	  Boot ID:                    715d4626-373f-499b-b5de-b6d832ce4fe4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-b5rrl                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-old-k8s-version-513442                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         45s
	  kube-system                 kindnet-tpjvb                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-513442             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-513442    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-hzfcx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-513442             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 50s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 50s)  kubelet          Node old-k8s-version-513442 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 50s)  kubelet          Node old-k8s-version-513442 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x7 over 50s)  kubelet          Node old-k8s-version-513442 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  49s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  44s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-513442 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-513442 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-513442 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node old-k8s-version-513442 event: Registered Node old-k8s-version-513442 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-513442 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[Nov24 12:45] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a fb 84 7d 9e 9e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[ +25.292047] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[  +0.024207] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 8e 71 0b 76 c3 08 06
	[ +16.768103] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	[  +5.950770] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[Nov24 12:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 8b d0 4a da 7e 08 06
	[  +0.000557] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[  +1.903671] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 1f e8 fc 59 74 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[ +17.535584] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 31 ec 7c 1d 38 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	
	
	==> etcd [5793c7fd11b5c568735219e3d193c67360dde88032a438ae332a3e12d7fdf0a5] <==
	{"level":"info","ts":"2025-11-24T13:47:46.896061Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-24T13:47:47.18298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-24T13:47:47.183032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-24T13:47:47.183064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-11-24T13:47:47.183082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-24T13:47:47.18309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T13:47:47.183102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-24T13:47:47.183112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T13:47:47.184166Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-513442 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T13:47:47.184441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T13:47:47.184423Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T13:47:47.184639Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:47:47.184677Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T13:47:47.184697Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T13:47:47.185356Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:47:47.185462Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:47:47.185485Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:47:47.186127Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-11-24T13:47:47.186272Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T13:48:02.673385Z","caller":"traceutil/trace.go:171","msg":"trace[456960560] linearizableReadLoop","detail":"{readStateIndex:331; appliedIndex:330; }","duration":"136.421105ms","start":"2025-11-24T13:48:02.536946Z","end":"2025-11-24T13:48:02.673367Z","steps":["trace[456960560] 'read index received'  (duration: 136.248358ms)","trace[456960560] 'applied index is now lower than readState.Index'  (duration: 171.987µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:48:02.673673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.721804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-24T13:48:02.67373Z","caller":"traceutil/trace.go:171","msg":"trace[286257082] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:319; }","duration":"136.809717ms","start":"2025-11-24T13:48:02.536907Z","end":"2025-11-24T13:48:02.673717Z","steps":["trace[286257082] 'agreement among raft nodes before linearized reading'  (duration: 136.690513ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:48:02.673851Z","caller":"traceutil/trace.go:171","msg":"trace[2009156990] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"168.350659ms","start":"2025-11-24T13:48:02.505481Z","end":"2025-11-24T13:48:02.673832Z","steps":["trace[2009156990] 'process raft request'  (duration: 167.775897ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:48:02.673811Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.836489ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:48:02.673892Z","caller":"traceutil/trace.go:171","msg":"trace[1422014017] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:319; }","duration":"132.929171ms","start":"2025-11-24T13:48:02.54095Z","end":"2025-11-24T13:48:02.673879Z","steps":["trace[1422014017] 'agreement among raft nodes before linearized reading'  (duration: 132.804065ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:48:36 up  2:30,  0 user,  load average: 2.03, 2.80, 1.92
	Linux old-k8s-version-513442 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1dab1df16e654e8d2bf5248f41d4e61a9922afd9e9aa99eb10b51ff76d83fd27] <==
	I1124 13:48:08.805828       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:48:08.806157       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 13:48:08.806325       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:48:08.806347       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:48:08.806366       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:48:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:48:09.065201       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:48:09.065237       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:48:09.065250       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:48:09.205219       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:48:09.465641       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:48:09.465667       1 metrics.go:72] Registering metrics
	I1124 13:48:09.465726       1 controller.go:711] "Syncing nftables rules"
	I1124 13:48:19.068504       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:48:19.068576       1 main.go:301] handling current node
	I1124 13:48:29.065440       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:48:29.065473       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b89c098ff2cb630c37cf57f5061688d52a419284b629da3305843a9dee1a5dbb] <==
	I1124 13:47:48.951700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 13:47:48.951970       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 13:47:48.951984       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 13:47:48.952108       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 13:47:48.952141       1 aggregator.go:166] initial CRD sync complete...
	I1124 13:47:48.952149       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 13:47:48.952156       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 13:47:48.952165       1 cache.go:39] Caches are synced for autoregister controller
	I1124 13:47:48.953986       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 13:47:49.152644       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:47:49.858204       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:47:49.862657       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:47:49.862682       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:47:50.422560       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:47:50.472548       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:47:50.570004       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:47:50.579741       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 13:47:50.580884       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 13:47:50.586999       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:47:50.885484       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 13:47:51.864040       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 13:47:51.877619       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:47:51.890804       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 13:48:04.597347       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:48:04.651565       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [f7663d3953f0ee1aca9b8f557f4e81791e15502a0a6447b494d2035c4c9b2dfc] <==
	I1124 13:48:03.884906       1 shared_informer.go:318] Caches are synced for deployment
	I1124 13:48:03.932363       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 13:48:03.941297       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 13:48:04.243318       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 13:48:04.243355       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 13:48:04.258877       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 13:48:04.607851       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hzfcx"
	I1124 13:48:04.611600       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tpjvb"
	I1124 13:48:04.656277       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 13:48:04.748220       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bcd4m"
	I1124 13:48:04.756616       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-b5rrl"
	I1124 13:48:04.767398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.018323ms"
	I1124 13:48:04.782835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.361034ms"
	I1124 13:48:04.782967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.68µs"
	I1124 13:48:04.940856       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 13:48:04.951934       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bcd4m"
	I1124 13:48:04.962829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.807545ms"
	I1124 13:48:04.970616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.726674ms"
	I1124 13:48:04.970784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.42µs"
	I1124 13:48:19.202453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.753µs"
	I1124 13:48:19.220547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.147µs"
	I1124 13:48:20.044339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.847µs"
	I1124 13:48:20.080458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.482374ms"
	I1124 13:48:20.080575       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.63µs"
	I1124 13:48:23.770117       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0b87cfcc163e379c4e72aa8c64739d9d13a801c140b5fabe7cbbc11022cfd12a] <==
	I1124 13:48:05.277959       1 server_others.go:69] "Using iptables proxy"
	I1124 13:48:05.288147       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1124 13:48:05.312455       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:48:05.315014       1 server_others.go:152] "Using iptables Proxier"
	I1124 13:48:05.315055       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 13:48:05.315064       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 13:48:05.315106       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 13:48:05.315978       1 server.go:846] "Version info" version="v1.28.0"
	I1124 13:48:05.316072       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:48:05.317668       1 config.go:188] "Starting service config controller"
	I1124 13:48:05.317713       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 13:48:05.317754       1 config.go:315] "Starting node config controller"
	I1124 13:48:05.317762       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 13:48:05.318091       1 config.go:97] "Starting endpoint slice config controller"
	I1124 13:48:05.318114       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 13:48:05.418055       1 shared_informer.go:318] Caches are synced for service config
	I1124 13:48:05.418104       1 shared_informer.go:318] Caches are synced for node config
	I1124 13:48:05.419230       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [bdd5c20173350449ff23a9ee9a791fe034c518afc7784448209ad9b0a5c32a9f] <==
	W1124 13:47:49.773882       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 13:47:49.773941       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 13:47:49.817194       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1124 13:47:49.817241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1124 13:47:49.898465       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 13:47:49.898514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 13:47:49.973231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 13:47:49.973807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 13:47:49.975515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 13:47:49.975624       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 13:47:50.044243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 13:47:50.044284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 13:47:50.065787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 13:47:50.065828       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1124 13:47:50.067051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1124 13:47:50.067084       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1124 13:47:50.088454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 13:47:50.088492       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 13:47:50.094062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 13:47:50.094103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 13:47:50.176377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 13:47:50.176425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 13:47:50.188050       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 13:47:50.188094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1124 13:47:51.410574       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 13:48:03 old-k8s-version-513442 kubelet[1521]: I1124 13:48:03.736815    1521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.621236    1521 topology_manager.go:215] "Topology Admit Handler" podUID="f4ba208a-1a78-46ae-9684-ff3309400852" podNamespace="kube-system" podName="kube-proxy-hzfcx"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.628198    1521 topology_manager.go:215] "Topology Admit Handler" podUID="c7df115a-8394-4f80-ac6c-5b1fc95337b5" podNamespace="kube-system" podName="kindnet-tpjvb"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.701758    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7df115a-8394-4f80-ac6c-5b1fc95337b5-xtables-lock\") pod \"kindnet-tpjvb\" (UID: \"c7df115a-8394-4f80-ac6c-5b1fc95337b5\") " pod="kube-system/kindnet-tpjvb"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702003    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cdcx\" (UniqueName: \"kubernetes.io/projected/f4ba208a-1a78-46ae-9684-ff3309400852-kube-api-access-6cdcx\") pod \"kube-proxy-hzfcx\" (UID: \"f4ba208a-1a78-46ae-9684-ff3309400852\") " pod="kube-system/kube-proxy-hzfcx"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702157    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c7df115a-8394-4f80-ac6c-5b1fc95337b5-cni-cfg\") pod \"kindnet-tpjvb\" (UID: \"c7df115a-8394-4f80-ac6c-5b1fc95337b5\") " pod="kube-system/kindnet-tpjvb"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702290    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7df115a-8394-4f80-ac6c-5b1fc95337b5-lib-modules\") pod \"kindnet-tpjvb\" (UID: \"c7df115a-8394-4f80-ac6c-5b1fc95337b5\") " pod="kube-system/kindnet-tpjvb"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702379    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnddq\" (UniqueName: \"kubernetes.io/projected/c7df115a-8394-4f80-ac6c-5b1fc95337b5-kube-api-access-cnddq\") pod \"kindnet-tpjvb\" (UID: \"c7df115a-8394-4f80-ac6c-5b1fc95337b5\") " pod="kube-system/kindnet-tpjvb"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702452    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4ba208a-1a78-46ae-9684-ff3309400852-kube-proxy\") pod \"kube-proxy-hzfcx\" (UID: \"f4ba208a-1a78-46ae-9684-ff3309400852\") " pod="kube-system/kube-proxy-hzfcx"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702483    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4ba208a-1a78-46ae-9684-ff3309400852-xtables-lock\") pod \"kube-proxy-hzfcx\" (UID: \"f4ba208a-1a78-46ae-9684-ff3309400852\") " pod="kube-system/kube-proxy-hzfcx"
	Nov 24 13:48:04 old-k8s-version-513442 kubelet[1521]: I1124 13:48:04.702513    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4ba208a-1a78-46ae-9684-ff3309400852-lib-modules\") pod \"kube-proxy-hzfcx\" (UID: \"f4ba208a-1a78-46ae-9684-ff3309400852\") " pod="kube-system/kube-proxy-hzfcx"
	Nov 24 13:48:06 old-k8s-version-513442 kubelet[1521]: I1124 13:48:06.009542    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hzfcx" podStartSLOduration=2.00948849 podCreationTimestamp="2025-11-24 13:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:06.009255456 +0000 UTC m=+14.175181609" watchObservedRunningTime="2025-11-24 13:48:06.00948849 +0000 UTC m=+14.175414641"
	Nov 24 13:48:09 old-k8s-version-513442 kubelet[1521]: I1124 13:48:09.017801    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-tpjvb" podStartSLOduration=2.028995374 podCreationTimestamp="2025-11-24 13:48:04 +0000 UTC" firstStartedPulling="2025-11-24 13:48:05.423030434 +0000 UTC m=+13.588956573" lastFinishedPulling="2025-11-24 13:48:08.411777827 +0000 UTC m=+16.577703968" observedRunningTime="2025-11-24 13:48:09.017454231 +0000 UTC m=+17.183380385" watchObservedRunningTime="2025-11-24 13:48:09.017742769 +0000 UTC m=+17.183668923"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.126026    1521 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.199313    1521 topology_manager.go:215] "Topology Admit Handler" podUID="65efb270-100a-4e7c-bee8-24de1df28586" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.202110    1521 topology_manager.go:215] "Topology Admit Handler" podUID="4e6c9b7c-5f0a-4c60-8197-20e985a07403" podNamespace="kube-system" podName="coredns-5dd5756b68-b5rrl"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.296963    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ccn\" (UniqueName: \"kubernetes.io/projected/65efb270-100a-4e7c-bee8-24de1df28586-kube-api-access-84ccn\") pod \"storage-provisioner\" (UID: \"65efb270-100a-4e7c-bee8-24de1df28586\") " pod="kube-system/storage-provisioner"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.297219    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/65efb270-100a-4e7c-bee8-24de1df28586-tmp\") pod \"storage-provisioner\" (UID: \"65efb270-100a-4e7c-bee8-24de1df28586\") " pod="kube-system/storage-provisioner"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.297296    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj4xm\" (UniqueName: \"kubernetes.io/projected/4e6c9b7c-5f0a-4c60-8197-20e985a07403-kube-api-access-sj4xm\") pod \"coredns-5dd5756b68-b5rrl\" (UID: \"4e6c9b7c-5f0a-4c60-8197-20e985a07403\") " pod="kube-system/coredns-5dd5756b68-b5rrl"
	Nov 24 13:48:19 old-k8s-version-513442 kubelet[1521]: I1124 13:48:19.297327    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e6c9b7c-5f0a-4c60-8197-20e985a07403-config-volume\") pod \"coredns-5dd5756b68-b5rrl\" (UID: \"4e6c9b7c-5f0a-4c60-8197-20e985a07403\") " pod="kube-system/coredns-5dd5756b68-b5rrl"
	Nov 24 13:48:20 old-k8s-version-513442 kubelet[1521]: I1124 13:48:20.055454    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-b5rrl" podStartSLOduration=16.055384325 podCreationTimestamp="2025-11-24 13:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:20.043996165 +0000 UTC m=+28.209922315" watchObservedRunningTime="2025-11-24 13:48:20.055384325 +0000 UTC m=+28.221310494"
	Nov 24 13:48:20 old-k8s-version-513442 kubelet[1521]: I1124 13:48:20.072835    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.072769008 podCreationTimestamp="2025-11-24 13:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:20.05633827 +0000 UTC m=+28.222264421" watchObservedRunningTime="2025-11-24 13:48:20.072769008 +0000 UTC m=+28.238695171"
	Nov 24 13:48:22 old-k8s-version-513442 kubelet[1521]: I1124 13:48:22.349894    1521 topology_manager.go:215] "Topology Admit Handler" podUID="e21ee73b-578f-48c9-826d-ab3b4bbb7871" podNamespace="default" podName="busybox"
	Nov 24 13:48:22 old-k8s-version-513442 kubelet[1521]: I1124 13:48:22.417169    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmgg8\" (UniqueName: \"kubernetes.io/projected/e21ee73b-578f-48c9-826d-ab3b4bbb7871-kube-api-access-mmgg8\") pod \"busybox\" (UID: \"e21ee73b-578f-48c9-826d-ab3b4bbb7871\") " pod="default/busybox"
	Nov 24 13:48:26 old-k8s-version-513442 kubelet[1521]: I1124 13:48:26.061183    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.287793929 podCreationTimestamp="2025-11-24 13:48:22 +0000 UTC" firstStartedPulling="2025-11-24 13:48:22.783005961 +0000 UTC m=+30.948932098" lastFinishedPulling="2025-11-24 13:48:25.556333595 +0000 UTC m=+33.722259740" observedRunningTime="2025-11-24 13:48:26.061015161 +0000 UTC m=+34.226941311" watchObservedRunningTime="2025-11-24 13:48:26.061121571 +0000 UTC m=+34.227047722"
	
	
	==> storage-provisioner [c9c8f51adb6bbca8e0f954ad9082c0c66235dce129e152dd682ab69622b44aac] <==
	I1124 13:48:19.713946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:48:19.725060       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:48:19.725122       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 13:48:19.732798       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:48:19.733028       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-513442_df294b40-30a6-4b8c-83ff-3d897f2504d8!
	I1124 13:48:19.733030       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"938f90ea-7103-4290-984c-f5e7c1aae849", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-513442_df294b40-30a6-4b8c-83ff-3d897f2504d8 became leader
	I1124 13:48:19.833675       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-513442_df294b40-30a6-4b8c-83ff-3d897f2504d8!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-513442 -n old-k8s-version-513442
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-513442 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (14.74s)
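When a DeployApp run fails on the in-container open-file limit, as the no-preload run reported below does, it can help to read the limit the node-side runtime itself is operating with. A minimal sketch, not part of the test output, assuming containerd runs as a systemd unit inside the kicbase node image and that pidof is available there:

# Hedged node-side check of the runtime's NOFILE limit for the profile above.
out/minikube-linux-amd64 ssh -p old-k8s-version-513442 'sudo systemctl show containerd --property=LimitNOFILE'
# Limit actually applied to the running containerd process (assumes pidof exists in the node image).
out/minikube-linux-amd64 ssh -p old-k8s-version-513442 'sudo cat /proc/$(pidof containerd)/limits | grep "Max open files"'

Comparing these values with what the pod reports narrows down whether the limit is clamped by the runtime service or by the container spec.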

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (13.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-608395 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e09b20ec-b541-4478-9c67-c55b56ae8991] Pending
helpers_test.go:352: "busybox" [e09b20ec-b541-4478-9c67-c55b56ae8991] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e09b20ec-b541-4478-9c67-c55b56ae8991] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003168887s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-608395 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
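The failing step is the 'ulimit -n' assertion: the busybox pod reports Ready, but the soft open-file limit inside the container is 1024 rather than the expected 1048576. The sequence can be replayed by hand against this context; a minimal sketch, where the kubectl wait call approximates the helper's 8m0s wait for the labelled pod:

# Deploy the test manifest and wait for the pod the test matches on.
kubectl --context no-preload-608395 create -f testdata/busybox.yaml
kubectl --context no-preload-608395 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
# The assertion that fails: soft open-file limit inside the container (expected 1048576, observed 1024 here).
kubectl --context no-preload-608395 exec busybox -- /bin/sh -c "ulimit -n"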
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-608395
helpers_test.go:243: (dbg) docker inspect no-preload-608395:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517",
	        "Created": "2025-11-24T13:47:36.064034647Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 610011,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:47:36.107803041Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517/hostname",
	        "HostsPath": "/var/lib/docker/containers/a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517/hosts",
	        "LogPath": "/var/lib/docker/containers/a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517/a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517-json.log",
	        "Name": "/no-preload-608395",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-608395:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-608395",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517",
	                "LowerDir": "/var/lib/docker/overlay2/07db3b81c9ae03654a1edfc8ae28fb3d1574335a879cbcd8db0ec3d1b8c2b022-init/diff:/var/lib/docker/overlay2/0f013e03fd0eaee4efc608fb0376e7d3e8ba628388f5191310c2259ab273ad26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07db3b81c9ae03654a1edfc8ae28fb3d1574335a879cbcd8db0ec3d1b8c2b022/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07db3b81c9ae03654a1edfc8ae28fb3d1574335a879cbcd8db0ec3d1b8c2b022/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07db3b81c9ae03654a1edfc8ae28fb3d1574335a879cbcd8db0ec3d1b8c2b022/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-608395",
	                "Source": "/var/lib/docker/volumes/no-preload-608395/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-608395",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-608395",
	                "name.minikube.sigs.k8s.io": "no-preload-608395",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "35e6c740b9266e02a48be0cb2494d2f8cd35e6377b15b9409b954948115a5bee",
	            "SandboxKey": "/var/run/docker/netns/35e6c740b926",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-608395": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "85e2905f6131e6f4ab94166eee446126fc1d6139a5452c9dd9a7c77abe756db0",
	                    "EndpointID": "ca65671436ff405263c2edcb381a8d49767e507c49f609ebdb40212efcfa2c6b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ce:1e:b1:5e:7d:83",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-608395",
	                        "a2cfa332a8b5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-608395 -n no-preload-608395
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-608395 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-608395 logs -n 25: (1.311190951s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-355661 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo containerd config dump                                                                                                                                                                                                        │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ start   │ -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ ssh     │ -p cilium-355661 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo crio config                                                                                                                                                                                                                   │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ delete  │ -p cilium-355661                                                                                                                                                                                                                                    │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ start   │ -p force-systemd-flag-775412 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ start   │ -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ force-systemd-flag-775412 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p force-systemd-flag-775412                                                                                                                                                                                                                        │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ -p NoKubernetes-787855 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │                     │
	│ start   │ -p cert-options-342221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ stop    │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p NoKubernetes-787855 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ cert-options-342221 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ -p cert-options-342221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p cert-options-342221                                                                                                                                                                                                                              │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p old-k8s-version-513442 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-513442    │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:48 UTC │
	│ ssh     │ -p NoKubernetes-787855 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │                     │
	│ delete  │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p no-preload-608395 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-608395         │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:48 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-513442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-513442    │ jenkins │ v1.37.0 │ 24 Nov 25 13:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:47:35
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:47:35.072446  608917 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:47:35.072749  608917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:47:35.072763  608917 out.go:374] Setting ErrFile to fd 2...
	I1124 13:47:35.072768  608917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:47:35.073046  608917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:47:35.073526  608917 out.go:368] Setting JSON to false
	I1124 13:47:35.074857  608917 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8994,"bootTime":1763983061,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:47:35.074959  608917 start.go:143] virtualization: kvm guest
	I1124 13:47:35.077490  608917 out.go:179] * [no-preload-608395] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:47:35.079255  608917 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:47:35.079255  608917 notify.go:221] Checking for updates...
	I1124 13:47:35.080776  608917 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:47:35.082396  608917 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:47:35.083932  608917 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:47:35.085251  608917 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:47:35.086603  608917 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:47:35.089427  608917 config.go:182] Loaded profile config "cert-expiration-099863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:35.089575  608917 config.go:182] Loaded profile config "kubernetes-upgrade-358357": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:35.089706  608917 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:47:35.089837  608917 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:47:35.114581  608917 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:47:35.114769  608917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:47:35.180508  608917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 13:47:35.169616068 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:47:35.180627  608917 docker.go:319] overlay module found
	I1124 13:47:35.182258  608917 out.go:179] * Using the docker driver based on user configuration
	I1124 13:47:35.183642  608917 start.go:309] selected driver: docker
	I1124 13:47:35.183663  608917 start.go:927] validating driver "docker" against <nil>
	I1124 13:47:35.183675  608917 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:47:35.184437  608917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:47:35.249663  608917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 13:47:35.237755455 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:47:35.249975  608917 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:47:35.250402  608917 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:47:35.252318  608917 out.go:179] * Using Docker driver with root privileges
	I1124 13:47:35.254354  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:47:35.254446  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:35.254457  608917 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:47:35.254652  608917 start.go:353] cluster config:
	{Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:35.256201  608917 out.go:179] * Starting "no-preload-608395" primary control-plane node in "no-preload-608395" cluster
	I1124 13:47:35.257392  608917 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:47:35.258857  608917 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:47:35.260330  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:47:35.260404  608917 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:47:35.260496  608917 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json ...
	I1124 13:47:35.260537  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json: {Name:mk2f4d5eff7070dcec35f39f30e01cd0b3fcce8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:35.260546  608917 cache.go:107] acquiring lock: {Name:mk28ec677a69a6f418643b8b89331fa25b8c42f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260546  608917 cache.go:107] acquiring lock: {Name:mkad3cbb6fa2e7f41e4d7c0e1e3c74156dc55521 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260557  608917 cache.go:107] acquiring lock: {Name:mk7aef7fc4ff6e4e4541fdeb1d5e26c13a66856b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260584  608917 cache.go:107] acquiring lock: {Name:mk586ecbe7f4b4aab48f8ad28d0d7b1848898c9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260604  608917 cache.go:107] acquiring lock: {Name:mkf548ea8c9721a4e4ad1e37073c3deea8530810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260622  608917 cache.go:107] acquiring lock: {Name:mk1ce266bd6b9003a6a371facbc84809dce0c3c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260651  608917 cache.go:107] acquiring lock: {Name:mk687b2dcc146d43e9d607f472f2f08a2307baed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260663  608917 cache.go:107] acquiring lock: {Name:mk4b559f0fdae6e96edea26981618bf8d9d50b2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260712  608917 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:35.260755  608917 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:35.260801  608917 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:35.260819  608917 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:35.260852  608917 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:35.260858  608917 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:47:35.260727  608917 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:35.261039  608917 cache.go:115] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 13:47:35.261050  608917 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 523.955µs
	I1124 13:47:35.261069  608917 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 13:47:35.262249  608917 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:35.262277  608917 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:35.262359  608917 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:35.262407  608917 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:47:35.262461  608917 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:35.262522  608917 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:35.262735  608917 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:35.285963  608917 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:47:35.285989  608917 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:47:35.286014  608917 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:47:35.286057  608917 start.go:360] acquireMachinesLock for no-preload-608395: {Name:mkc9d1cf0cec9be2b369f1e47c690fc0399e88e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.286191  608917 start.go:364] duration metric: took 102.178µs to acquireMachinesLock for "no-preload-608395"
	I1124 13:47:35.286224  608917 start.go:93] Provisioning new machine with config: &{Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:47:35.286330  608917 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:47:30.558317  607669 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:47:30.558626  607669 start.go:159] libmachine.API.Create for "old-k8s-version-513442" (driver="docker")
	I1124 13:47:30.558656  607669 client.go:173] LocalClient.Create starting
	I1124 13:47:30.558725  607669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem
	I1124 13:47:30.558754  607669 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:30.558772  607669 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:30.558826  607669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem
	I1124 13:47:30.558849  607669 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:30.558860  607669 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:30.559212  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:47:30.577139  607669 cli_runner.go:211] docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:47:30.577245  607669 network_create.go:284] running [docker network inspect old-k8s-version-513442] to gather additional debugging logs...
	I1124 13:47:30.577276  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442
	W1124 13:47:30.593786  607669 cli_runner.go:211] docker network inspect old-k8s-version-513442 returned with exit code 1
	I1124 13:47:30.593826  607669 network_create.go:287] error running [docker network inspect old-k8s-version-513442]: docker network inspect old-k8s-version-513442: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-513442 not found
	I1124 13:47:30.593854  607669 network_create.go:289] output of [docker network inspect old-k8s-version-513442]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-513442 not found
	
	** /stderr **
	I1124 13:47:30.594026  607669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:30.613315  607669 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8afb578efdfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:5e:46:43:aa:fe} reservation:<nil>}
	I1124 13:47:30.614364  607669 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ca3a55f53176 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:98:62:4c:91:8f} reservation:<nil>}
	I1124 13:47:30.614827  607669 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e11236ccf9ba IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:36:3b:80:be:95:34} reservation:<nil>}
	I1124 13:47:30.615410  607669 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-35b7bf6fd97a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:12:4e:d4:19:26} reservation:<nil>}
	I1124 13:47:30.616018  607669 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1f5932eecbe7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:aa:ff:d3:cd:de:0f} reservation:<nil>}
	I1124 13:47:30.617269  607669 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7fa00}
	I1124 13:47:30.617308  607669 network_create.go:124] attempt to create docker network old-k8s-version-513442 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 13:47:30.617398  607669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-513442 old-k8s-version-513442
	I1124 13:47:30.671102  607669 network_create.go:108] docker network old-k8s-version-513442 192.168.94.0/24 created
	I1124 13:47:30.671150  607669 kic.go:121] calculated static IP "192.168.94.2" for the "old-k8s-version-513442" container
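(The scan above walks the 192.168.x.0/24 ranges until it finds one with no existing bridge, then creates the cluster network with a fixed gateway and MTU and derives the node's static IP from it. A reduced sketch of that final step, assuming the docker CLI and the 192.168.94.0/24 range chosen in this run; the flags are a subset of the network_create command shown above:

    # Fails with a non-zero exit if the subnet overlaps an existing network,
    # which is what the "skipping subnet ... taken" probes guard against.
    docker network create --driver=bridge \
      --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      old-k8s-version-513442
    docker network inspect old-k8s-version-513442 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
)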
	I1124 13:47:30.671218  607669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:47:30.689078  607669 cli_runner.go:164] Run: docker volume create old-k8s-version-513442 --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:47:30.709312  607669 oci.go:103] Successfully created a docker volume old-k8s-version-513442
	I1124 13:47:30.709408  607669 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-513442-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --entrypoint /usr/bin/test -v old-k8s-version-513442:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:47:31.132905  607669 oci.go:107] Successfully prepared a docker volume old-k8s-version-513442
	I1124 13:47:31.132980  607669 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:47:31.132992  607669 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:47:31.133075  607669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-513442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:47:35.011677  607669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-513442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.878547269s)
	I1124 13:47:35.011716  607669 kic.go:203] duration metric: took 3.878721361s to extract preloaded images to volume ...
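(The ~3.9 s step above unpacks the preloaded image tarball straight into the node's named volume by running tar inside a throwaway kicbase container. A stripped-down sketch of the same idea; the volume name "demo-node" and the local preload filename are illustrative, the image and tar flags come from the command in the log:

    # Mount the preload read-only and the named volume as the extraction target;
    # the helper container exists only for the duration of the tar run (--rm).
    docker volume create demo-node
    docker run --rm \
      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v demo-node:/extractDir \
      --entrypoint /usr/bin/tar \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948 \
      -I lz4 -xf /preloaded.tar -C /extractDir
)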
	W1124 13:47:35.011796  607669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:47:35.011829  607669 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:47:35.011871  607669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:47:35.073961  607669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-513442 --name old-k8s-version-513442 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-513442 --network old-k8s-version-513442 --ip 192.168.94.2 --volume old-k8s-version-513442:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:47:32.801968  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:32.802485  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
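(The healthz probe above is refused because the apiserver container is not serving yet; the rest of this block is minikube collecting container state to explain why. The same probe can be reproduced by hand, assuming curl on a host that can reach 192.168.76.2; certificate verification is skipped since the check is over TLS with the cluster's own CA:

    # Expect "connection refused" while the apiserver is down, "ok" once /healthz responds.
    curl -sk --max-time 2 https://192.168.76.2:8443/healthz || echo "apiserver not reachable"
)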
	I1124 13:47:32.802542  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:32.802595  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:32.832902  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:32.832956  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:32.832963  572647 cri.go:89] found id: ""
	I1124 13:47:32.832972  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:32.833038  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.837621  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.841927  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:32.842013  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:32.877193  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:32.877214  572647 cri.go:89] found id: ""
	I1124 13:47:32.877223  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:32.877290  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.882239  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:32.882329  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:32.912677  572647 cri.go:89] found id: ""
	I1124 13:47:32.912709  572647 logs.go:282] 0 containers: []
	W1124 13:47:32.912727  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:32.912735  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:32.912799  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:32.942634  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:32.942656  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:32.942662  572647 cri.go:89] found id: ""
	I1124 13:47:32.942672  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:32.942735  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.947427  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.951442  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:32.951519  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:32.982583  572647 cri.go:89] found id: ""
	I1124 13:47:32.982614  572647 logs.go:282] 0 containers: []
	W1124 13:47:32.982626  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:32.982635  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:32.982706  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:33.013412  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:33.013432  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:33.013435  572647 cri.go:89] found id: ""
	I1124 13:47:33.013444  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:33.013492  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:33.017848  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:33.021955  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:33.022038  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:33.055691  572647 cri.go:89] found id: ""
	I1124 13:47:33.055722  572647 logs.go:282] 0 containers: []
	W1124 13:47:33.055733  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:33.055743  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:33.055822  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:33.086844  572647 cri.go:89] found id: ""
	I1124 13:47:33.086868  572647 logs.go:282] 0 containers: []
	W1124 13:47:33.086877  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:33.086887  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:33.086904  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:33.140737  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:33.140775  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:33.185221  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:33.185259  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:33.218642  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:33.218669  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:33.251506  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:33.251634  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:33.346627  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:33.346672  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:33.363530  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:33.363571  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:33.400997  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:33.401042  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:33.446051  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:33.446088  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:33.484418  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:33.484465  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:33.537056  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:33.537093  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:33.611727  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:33.611762  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:33.611778  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
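(Each "Gathering logs for ..." pair above follows the same two-step pattern: resolve container IDs by name with crictl ps, then tail each container's log. A condensed sketch of that pattern, assuming crictl is on the PATH inside the node (the log runs it via sudo over SSH):

    # Same pattern as the logs.go lines: list matching containers, then tail their logs.
    for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
      echo "=== $id ==="
      sudo crictl logs --tail 400 "$id"
    done
)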
	I1124 13:47:36.150015  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:36.150435  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:47:36.150499  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:36.150559  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:36.181496  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:36.181524  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.181530  572647 cri.go:89] found id: ""
	I1124 13:47:36.181541  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:36.181626  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.186587  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.190995  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:36.191076  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:35.288531  608917 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:47:35.288826  608917 start.go:159] libmachine.API.Create for "no-preload-608395" (driver="docker")
	I1124 13:47:35.288879  608917 client.go:173] LocalClient.Create starting
	I1124 13:47:35.288981  608917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem
	I1124 13:47:35.289027  608917 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:35.289053  608917 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:35.289129  608917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem
	I1124 13:47:35.289159  608917 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:35.289172  608917 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:35.289667  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:47:35.309178  608917 cli_runner.go:211] docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:47:35.309257  608917 network_create.go:284] running [docker network inspect no-preload-608395] to gather additional debugging logs...
	I1124 13:47:35.309283  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395
	W1124 13:47:35.328323  608917 cli_runner.go:211] docker network inspect no-preload-608395 returned with exit code 1
	I1124 13:47:35.328350  608917 network_create.go:287] error running [docker network inspect no-preload-608395]: docker network inspect no-preload-608395: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-608395 not found
	I1124 13:47:35.328362  608917 network_create.go:289] output of [docker network inspect no-preload-608395]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-608395 not found
	
	** /stderr **
	I1124 13:47:35.328448  608917 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:35.351281  608917 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8afb578efdfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:5e:46:43:aa:fe} reservation:<nil>}
	I1124 13:47:35.352105  608917 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ca3a55f53176 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:98:62:4c:91:8f} reservation:<nil>}
	I1124 13:47:35.352583  608917 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e11236ccf9ba IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:36:3b:80:be:95:34} reservation:<nil>}
	I1124 13:47:35.353066  608917 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-35b7bf6fd97a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:12:4e:d4:19:26} reservation:<nil>}
	I1124 13:47:35.353566  608917 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1f5932eecbe7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:aa:ff:d3:cd:de:0f} reservation:<nil>}
	I1124 13:47:35.354145  608917 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-57f535f2d59b IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:6e:28:a9:1e:8a:96} reservation:<nil>}
	I1124 13:47:35.354775  608917 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d86bc0}
	I1124 13:47:35.354805  608917 network_create.go:124] attempt to create docker network no-preload-608395 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 13:47:35.354861  608917 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-608395 no-preload-608395
	I1124 13:47:35.432539  608917 network_create.go:108] docker network no-preload-608395 192.168.103.0/24 created
	I1124 13:47:35.432598  608917 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-608395" container
	I1124 13:47:35.432695  608917 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:47:35.453593  608917 cli_runner.go:164] Run: docker volume create no-preload-608395 --label name.minikube.sigs.k8s.io=no-preload-608395 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:47:35.471825  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:47:35.475329  608917 oci.go:103] Successfully created a docker volume no-preload-608395
	I1124 13:47:35.475418  608917 cli_runner.go:164] Run: docker run --rm --name no-preload-608395-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-608395 --entrypoint /usr/bin/test -v no-preload-608395:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:47:35.484374  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:47:35.522730  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:47:35.528813  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:47:35.529239  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:47:35.541677  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:47:35.561542  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:47:35.640840  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 13:47:35.640868  608917 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 380.250244ms
	I1124 13:47:35.640883  608917 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 13:47:35.985260  608917 oci.go:107] Successfully prepared a docker volume no-preload-608395
	I1124 13:47:35.985319  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1124 13:47:35.985414  608917 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:47:35.985453  608917 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:47:35.985506  608917 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:47:36.047047  608917 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-608395 --name no-preload-608395 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-608395 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-608395 --network no-preload-608395 --ip 192.168.103.2 --volume no-preload-608395:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:47:36.258467  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1124 13:47:36.258503  608917 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 997.955969ms
	I1124 13:47:36.258519  608917 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1124 13:47:36.410125  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Running}}
	I1124 13:47:36.432289  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.453312  608917 cli_runner.go:164] Run: docker exec no-preload-608395 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:47:36.504193  608917 oci.go:144] the created container "no-preload-608395" has a running status.
	I1124 13:47:36.504226  608917 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa...
	I1124 13:47:36.604837  608917 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:47:36.631267  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.655799  608917 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:47:36.655830  608917 kic_runner.go:114] Args: [docker exec --privileged no-preload-608395 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:47:36.705661  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.729778  608917 machine.go:94] provisionDockerMachine start ...
	I1124 13:47:36.729884  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:36.756901  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:36.757367  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:36.757380  608917 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:47:36.758446  608917 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
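(The "handshake failed: EOF" line is expected this early: sshd inside the freshly started container is not up yet, and provisioning simply retries until the successful "SSH cmd err, output" line a few seconds later. The forwarded port it dials can be found the same way the log does, assuming the docker CLI on the host; the manual ssh invocation at the end is a hypothetical illustration and not a command taken from the log:

    # Host port that Docker mapped to the container's 22/tcp (33441 in this run).
    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-608395)
    # Hypothetical manual login with the generated key; minikube does the equivalent in Go.
    ssh -i ~/.minikube/machines/no-preload-608395/id_rsa -p "$PORT" docker@127.0.0.1 hostname
)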
	I1124 13:47:37.510037  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1124 13:47:37.510068  608917 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 2.249448579s
	I1124 13:47:37.510081  608917 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1124 13:47:37.572176  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1124 13:47:37.572211  608917 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.31168357s
	I1124 13:47:37.572229  608917 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1124 13:47:37.595833  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1124 13:47:37.595868  608917 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 2.335217312s
	I1124 13:47:37.595886  608917 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1124 13:47:37.719899  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1124 13:47:37.719956  608917 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 2.45935214s
	I1124 13:47:37.719969  608917 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1124 13:47:38.059972  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1124 13:47:38.060022  608917 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.799433794s
	I1124 13:47:38.060036  608917 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1124 13:47:38.060055  608917 cache.go:87] Successfully saved all images to host disk.
	I1124 13:47:39.915534  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-608395
	
	I1124 13:47:39.915567  608917 ubuntu.go:182] provisioning hostname "no-preload-608395"
	I1124 13:47:39.915651  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:39.936421  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:39.936658  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:39.936672  608917 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-608395 && echo "no-preload-608395" | sudo tee /etc/hostname
	I1124 13:47:35.415632  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Running}}
	I1124 13:47:35.436407  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.457824  607669 cli_runner.go:164] Run: docker exec old-k8s-version-513442 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:47:35.505936  607669 oci.go:144] the created container "old-k8s-version-513442" has a running status.
	I1124 13:47:35.505993  607669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa...
	I1124 13:47:35.536159  607669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:47:35.565751  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.587350  607669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:47:35.587376  607669 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-513442 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:47:35.639485  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.659275  607669 machine.go:94] provisionDockerMachine start ...
	I1124 13:47:35.659377  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:35.682791  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:35.683193  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:35.683215  607669 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:47:35.683887  607669 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57402->127.0.0.1:33435: read: connection reset by peer
	I1124 13:47:38.829345  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-513442
	
	I1124 13:47:38.829376  607669 ubuntu.go:182] provisioning hostname "old-k8s-version-513442"
	I1124 13:47:38.829451  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:38.847276  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:38.847521  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:38.847540  607669 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-513442 && echo "old-k8s-version-513442" | sudo tee /etc/hostname
	I1124 13:47:39.005190  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-513442
	
	I1124 13:47:39.005277  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.023623  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:39.023848  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:39.023866  607669 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-513442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-513442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-513442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:47:39.170228  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:47:39.170266  607669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:47:39.170286  607669 ubuntu.go:190] setting up certificates
	I1124 13:47:39.170295  607669 provision.go:84] configureAuth start
	I1124 13:47:39.170348  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.189446  607669 provision.go:143] copyHostCerts
	I1124 13:47:39.189521  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:47:39.189536  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:47:39.189619  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:47:39.189751  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:47:39.189764  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:47:39.189810  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:47:39.189989  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:47:39.190006  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:47:39.190054  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:47:39.190154  607669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-513442 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-513442]
	I1124 13:47:39.227079  607669 provision.go:177] copyRemoteCerts
	I1124 13:47:39.227139  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:47:39.227177  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.244951  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.349311  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:47:39.371319  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:47:39.391311  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 13:47:39.411071  607669 provision.go:87] duration metric: took 240.75737ms to configureAuth
	I1124 13:47:39.411102  607669 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:47:39.411303  607669 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:47:39.411317  607669 machine.go:97] duration metric: took 3.752022568s to provisionDockerMachine
	I1124 13:47:39.411325  607669 client.go:176] duration metric: took 8.852661553s to LocalClient.Create
	I1124 13:47:39.411358  607669 start.go:167] duration metric: took 8.852720089s to libmachine.API.Create "old-k8s-version-513442"
	I1124 13:47:39.411372  607669 start.go:293] postStartSetup for "old-k8s-version-513442" (driver="docker")
	I1124 13:47:39.411388  607669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:47:39.411452  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:47:39.411508  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.429085  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.536320  607669 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:47:39.540367  607669 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:47:39.540402  607669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:47:39.540414  607669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:47:39.540470  607669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:47:39.540543  607669 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:47:39.540631  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:47:39.549275  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:39.573695  607669 start.go:296] duration metric: took 162.301306ms for postStartSetup
	I1124 13:47:39.574191  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.593438  607669 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/config.json ...
	I1124 13:47:39.593801  607669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:47:39.593897  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.615008  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.717288  607669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:47:39.722340  607669 start.go:128] duration metric: took 9.166080327s to createHost
	I1124 13:47:39.722370  607669 start.go:83] releasing machines lock for "old-k8s-version-513442", held for 9.166275546s
	I1124 13:47:39.722447  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.743680  607669 ssh_runner.go:195] Run: cat /version.json
	I1124 13:47:39.743731  607669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:47:39.743745  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.743812  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.763336  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.763737  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.929805  607669 ssh_runner.go:195] Run: systemctl --version
	I1124 13:47:39.938447  607669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:47:39.944068  607669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:47:39.944147  607669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:47:39.974609  607669 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:47:39.974641  607669 start.go:496] detecting cgroup driver to use...
	I1124 13:47:39.974679  607669 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:47:39.974728  607669 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:47:39.990824  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:47:40.004856  607669 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:47:40.004920  607669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:47:40.024248  607669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:47:40.044433  607669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:47:40.145638  607669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:47:40.247759  607669 docker.go:234] disabling docker service ...
	I1124 13:47:40.247829  607669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:47:40.269922  607669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:47:40.284840  607669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:47:40.379978  607669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:47:40.471616  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:47:40.485207  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:47:40.501980  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 13:47:40.513545  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:47:40.524134  607669 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:47:40.524215  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:47:40.533927  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:40.543474  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:47:40.553177  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:40.563129  607669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:47:40.572813  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:47:40.583799  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:47:40.593872  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:47:40.604166  607669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:47:40.612262  607669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:47:40.620472  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:40.706065  607669 ssh_runner.go:195] Run: sudo systemctl restart containerd
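(The sed runs above rewrite /etc/containerd/config.toml in place so the runtime matches the systemd cgroup driver detected on the host and the expected pause image, and the change only takes effect after the daemon-reload and restart at the end. A condensed sketch of the two edits that matter most here, assuming the same config.toml layout; the sed expressions are copied from the commands in the log:

    # Use the systemd cgroup driver (matches the "systemd" driver detected on the host).
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
    # Pin the sandbox (pause) image that kubelet expects for this Kubernetes version.
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
)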
	I1124 13:47:40.809269  607669 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:47:40.809335  607669 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:47:40.814110  607669 start.go:564] Will wait 60s for crictl version
	I1124 13:47:40.814187  607669 ssh_runner.go:195] Run: which crictl
	I1124 13:47:40.818745  607669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:47:40.843808  607669 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:47:40.843877  607669 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:40.865477  607669 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:40.893673  607669 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
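	The lines above show minikube switching this node's runtime to containerd: cri-docker and docker are stopped and masked, /etc/crictl.yaml is pointed at the containerd socket, sed edits pin the sandbox (pause) image and turn on SystemdCgroup in /etc/containerd/config.toml, and containerd is restarted. A minimal sketch for spot-checking that result on the node (paths taken from the log above; the exact config.toml layout depends on the containerd build, so treat this as an illustration rather than something the test runs):

	    # illustration only: verify the edits the log describes actually landed
	    cat /etc/crictl.yaml
	    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	    sudo systemctl is-active containerd && sudo crictl version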
	I1124 13:47:36.234464  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:36.234492  572647 cri.go:89] found id: ""
	I1124 13:47:36.234504  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:36.234584  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.240249  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:36.240335  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:36.279967  572647 cri.go:89] found id: ""
	I1124 13:47:36.279998  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.280009  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:36.280027  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:36.280082  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:36.313257  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:36.313286  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:36.313292  572647 cri.go:89] found id: ""
	I1124 13:47:36.313302  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:36.313364  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.317818  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.322103  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:36.322170  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:36.352450  572647 cri.go:89] found id: ""
	I1124 13:47:36.352485  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.352497  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:36.352506  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:36.352569  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:36.381849  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:36.381876  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:36.381881  572647 cri.go:89] found id: ""
	I1124 13:47:36.381896  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:36.381995  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.386540  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.391244  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:36.391326  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:36.425813  572647 cri.go:89] found id: ""
	I1124 13:47:36.425845  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.425856  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:36.425864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:36.425945  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:36.461097  572647 cri.go:89] found id: ""
	I1124 13:47:36.461127  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.461139  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:36.461153  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:36.461172  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:36.499983  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:36.500029  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:36.521192  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:36.521223  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:36.557807  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:36.557859  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:36.611092  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:36.611122  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:36.647506  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:36.647538  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:36.773107  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:36.773142  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:36.847612  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:36.847637  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:36.847662  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.887116  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:36.887154  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:36.924700  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:36.924746  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:36.974655  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:36.974689  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:37.017086  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:37.017118  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.548013  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:39.548547  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:47:39.548616  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:39.548676  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:39.577831  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:39.577852  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:39.577857  572647 cri.go:89] found id: ""
	I1124 13:47:39.577867  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:39.577947  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.582354  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.586625  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:39.586710  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:39.614522  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:39.614543  572647 cri.go:89] found id: ""
	I1124 13:47:39.614552  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:39.614607  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.619054  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:39.619127  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:39.646326  572647 cri.go:89] found id: ""
	I1124 13:47:39.646352  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.646363  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:39.646370  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:39.646429  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:39.672725  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:39.672745  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:39.672749  572647 cri.go:89] found id: ""
	I1124 13:47:39.672757  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:39.672814  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.677191  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.681175  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:39.681258  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:39.708431  572647 cri.go:89] found id: ""
	I1124 13:47:39.708455  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.708464  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:39.708470  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:39.708519  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:39.740642  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.740666  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:39.740672  572647 cri.go:89] found id: ""
	I1124 13:47:39.740682  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:39.740749  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.745558  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.749963  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:39.750090  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:39.785165  572647 cri.go:89] found id: ""
	I1124 13:47:39.785200  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.785213  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:39.785223  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:39.785297  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:39.816314  572647 cri.go:89] found id: ""
	I1124 13:47:39.816344  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.816356  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:39.816369  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:39.816386  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:39.855047  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:39.855082  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.884850  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:39.884886  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:39.923160  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:39.923209  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:40.011551  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:40.011587  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:40.028754  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:40.028784  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:40.073406  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:40.073463  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:40.118088  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:40.118130  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:40.186938  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:40.186963  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:40.186979  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:40.225544  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:40.225575  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:40.264167  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:40.264212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:40.310248  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:40.310285  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:40.101111  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-608395
	
	I1124 13:47:40.101196  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.122644  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:40.122921  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:40.122949  608917 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-608395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-608395/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-608395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:47:40.280196  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:47:40.280226  608917 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:47:40.280268  608917 ubuntu.go:190] setting up certificates
	I1124 13:47:40.280293  608917 provision.go:84] configureAuth start
	I1124 13:47:40.280380  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.303469  608917 provision.go:143] copyHostCerts
	I1124 13:47:40.303532  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:47:40.303543  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:47:40.303590  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:47:40.303726  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:47:40.303739  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:47:40.303772  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:47:40.303856  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:47:40.303868  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:47:40.303892  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:47:40.303983  608917 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.no-preload-608395 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-608395]
	I1124 13:47:40.375070  608917 provision.go:177] copyRemoteCerts
	I1124 13:47:40.375131  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:47:40.375180  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.394610  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.501959  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:47:40.523137  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 13:47:40.542279  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:47:40.562226  608917 provision.go:87] duration metric: took 281.905194ms to configureAuth
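	configureAuth above generated a server certificate with the SANs listed at the "generating server cert" line (127.0.0.1 192.168.103.2 localhost minikube no-preload-608395) and copied server.pem/server-key.pem to /etc/docker on the node. A hypothetical spot check of the SANs in the copied certificate, not something the test itself performs:

	    # illustration only: confirm the SANs in the provisioned server certificate
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'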
	I1124 13:47:40.562265  608917 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:47:40.562572  608917 config.go:182] Loaded profile config "no-preload-608395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:40.562595  608917 machine.go:97] duration metric: took 3.832793094s to provisionDockerMachine
	I1124 13:47:40.562604  608917 client.go:176] duration metric: took 5.273718281s to LocalClient.Create
	I1124 13:47:40.562649  608917 start.go:167] duration metric: took 5.273809151s to libmachine.API.Create "no-preload-608395"
	I1124 13:47:40.562659  608917 start.go:293] postStartSetup for "no-preload-608395" (driver="docker")
	I1124 13:47:40.562671  608917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:47:40.562721  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:47:40.562769  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.582715  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.688873  608917 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:47:40.692683  608917 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:47:40.692717  608917 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:47:40.692818  608917 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:47:40.692947  608917 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:47:40.693078  608917 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:47:40.693208  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:47:40.702139  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:40.725883  608917 start.go:296] duration metric: took 163.205649ms for postStartSetup
	I1124 13:47:40.726376  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.744526  608917 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json ...
	I1124 13:47:40.745022  608917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:47:40.745098  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.763260  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.869180  608917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:47:40.874423  608917 start.go:128] duration metric: took 5.58807074s to createHost
	I1124 13:47:40.874458  608917 start.go:83] releasing machines lock for "no-preload-608395", held for 5.58825096s
	I1124 13:47:40.874540  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.896709  608917 ssh_runner.go:195] Run: cat /version.json
	I1124 13:47:40.896763  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.896807  608917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:47:40.896904  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.918859  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.920576  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:41.084454  608917 ssh_runner.go:195] Run: systemctl --version
	I1124 13:47:41.091582  608917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:47:41.097406  608917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:47:41.097478  608917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:47:41.125540  608917 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:47:41.125566  608917 start.go:496] detecting cgroup driver to use...
	I1124 13:47:41.125601  608917 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:47:41.125650  608917 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:47:41.148294  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:47:41.167664  608917 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:47:41.167740  608917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:47:41.189235  608917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:47:41.213594  608917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:47:41.336134  608917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:47:41.426955  608917 docker.go:234] disabling docker service ...
	I1124 13:47:41.427023  608917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:47:41.448189  608917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:47:41.462073  608917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:47:41.548298  608917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:47:41.635202  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:47:41.649149  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:47:41.664451  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 13:47:41.676460  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:47:41.686131  608917 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:47:41.686199  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:47:41.695720  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:41.705503  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:47:41.714879  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:41.724369  608917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:47:41.733131  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:47:41.742525  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:47:41.751826  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:47:41.762473  608917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:47:41.770755  608917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:47:41.779154  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:41.869150  608917 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 13:47:41.957807  608917 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:47:41.957876  608917 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:47:41.965431  608917 start.go:564] Will wait 60s for crictl version
	I1124 13:47:41.965500  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:41.970973  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:47:42.001317  608917 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:47:42.001405  608917 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:42.026320  608917 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:42.052318  608917 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 13:47:40.896022  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:40.918522  607669 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 13:47:40.923315  607669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:40.935781  607669 kubeadm.go:884] updating cluster {Name:old-k8s-version-513442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:47:40.935932  607669 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:47:40.935998  607669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:40.965650  607669 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:47:40.965689  607669 containerd.go:534] Images already preloaded, skipping extraction
	I1124 13:47:40.965773  607669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:40.999412  607669 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:47:40.999441  607669 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:47:40.999451  607669 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1124 13:47:40.999568  607669 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-513442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
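	The kubelet unit text above is only rendered in memory at this point; it takes effect once it is written to the node as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down) and systemd is reloaded. A small sketch, assuming the drop-in is already in place, for inspecting the merged unit on the node:

	    # illustration: show kubelet.service together with the 10-kubeadm.conf drop-in
	    sudo systemctl cat kubelet
	    sudo systemctl status kubelet --no-pager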
	I1124 13:47:40.999640  607669 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:47:41.030216  607669 cni.go:84] Creating CNI manager for ""
	I1124 13:47:41.030250  607669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:41.030273  607669 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:47:41.030304  607669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-513442 NodeName:old-k8s-version-513442 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:47:41.030479  607669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-513442"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:47:41.030593  607669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 13:47:41.040496  607669 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:47:41.040574  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:47:41.048965  607669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1124 13:47:41.063246  607669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:47:41.080199  607669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
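	At this point the kubeadm YAML dumped above has been copied to the node as /var/tmp/minikube/kubeadm.yaml.new (it is promoted to kubeadm.yaml just before init, further down). As a hedged aside: recent kubeadm releases can sanity-check such a file ahead of time; assuming the v1.28.0 binaries under /var/lib/minikube/binaries expose `kubeadm config validate`, a manual check would look like:

	    # illustration only; not run by the test
	    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new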
	I1124 13:47:41.095141  607669 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:47:41.099735  607669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:41.111816  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:41.205774  607669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:47:41.229647  607669 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442 for IP: 192.168.94.2
	I1124 13:47:41.229678  607669 certs.go:195] generating shared ca certs ...
	I1124 13:47:41.229702  607669 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.229867  607669 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:47:41.229906  607669 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:47:41.229935  607669 certs.go:257] generating profile certs ...
	I1124 13:47:41.230010  607669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key
	I1124 13:47:41.230025  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt with IP's: []
	I1124 13:47:41.438692  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt ...
	I1124 13:47:41.438735  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: {Name:mkbb44e092f1569b20ffeeea6d19871e0c7ea39c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.438903  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key ...
	I1124 13:47:41.438942  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key: {Name:mkcdbea7ce1dc4681fc91bbc4b78d2c028c94687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.439100  607669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4
	I1124 13:47:41.439127  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 13:47:41.518895  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 ...
	I1124 13:47:41.518941  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4: {Name:mk47b90333d21f736ed33504f6da28b133242551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.519134  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4 ...
	I1124 13:47:41.519153  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4: {Name:mk4592466df77ceb7a68fa27e5f9a0201b1a8063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.519239  607669 certs.go:382] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt
	I1124 13:47:41.519312  607669 certs.go:386] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key
	I1124 13:47:41.519368  607669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key
	I1124 13:47:41.519388  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt with IP's: []
	I1124 13:47:41.757186  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt ...
	I1124 13:47:41.757217  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt: {Name:mkb434108adbee544176aebf04c9ed8a63b76175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.757418  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key ...
	I1124 13:47:41.757442  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key: {Name:mk640e3789cee888121bd6cc947590ae24e90dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.757683  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:47:41.757725  607669 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:47:41.757736  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:47:41.757777  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:47:41.757814  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:47:41.757849  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:47:41.757940  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:41.758610  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:47:41.778634  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:47:41.799349  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:47:41.825279  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:47:41.844900  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 13:47:41.865036  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:47:41.887428  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:47:41.912645  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:47:41.937284  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:47:41.966303  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:47:41.989056  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:47:42.011989  607669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:47:42.027976  607669 ssh_runner.go:195] Run: openssl version
	I1124 13:47:42.036340  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:47:42.046698  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.051406  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.051481  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.089903  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:47:42.100357  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:47:42.110986  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.115955  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.116031  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.153310  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:47:42.163209  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:47:42.173625  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.178229  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.178308  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.216281  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
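	The hash-named links created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: each symlink in /etc/ssl/certs is named after the certificate's subject hash plus a ".0" index, which is how hash-based lookup finds the PEMs placed into /usr/share/ca-certificates. A minimal sketch of the equivalent manual step for the minikube CA, using only names that appear in the log:

	    # illustration: derive the hash the same way the test's openssl invocation does
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"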
	I1124 13:47:42.228415  607669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:47:42.232854  607669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:47:42.232959  607669 kubeadm.go:401] StartCluster: {Name:old-k8s-version-513442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:42.233058  607669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:47:42.233119  607669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:47:42.262130  607669 cri.go:89] found id: ""
	I1124 13:47:42.262225  607669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:47:42.271622  607669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:47:42.280568  607669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:47:42.280637  607669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:47:42.289222  607669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:47:42.289241  607669 kubeadm.go:158] found existing configuration files:
	
	I1124 13:47:42.289287  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:47:42.297481  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:47:42.297560  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:47:42.306305  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:47:42.315150  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:47:42.315224  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:47:42.324595  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:47:42.333840  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:47:42.333922  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:47:42.344021  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:47:42.355171  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:47:42.355226  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
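
The grep/rm pairs above are minikube's stale-kubeconfig sweep: for each file under /etc/kubernetes it greps for the expected control-plane endpoint and deletes the file when the endpoint is not found (here every grep exits with status 2 because the files do not exist yet, so the sweep is effectively a no-op on first start). A minimal bash sketch of that per-file check, using the endpoint and file names from the log:

  endpoint="https://control-plane.minikube.internal:8443"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # keep the file only when it already points at the expected endpoint
    if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
      sudo rm -f "/etc/kubernetes/$f"
    fi
  done
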
	I1124 13:47:42.364345  607669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:47:42.433190  607669 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 13:47:42.433270  607669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:47:42.487608  607669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:47:42.487695  607669 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:47:42.487758  607669 kubeadm.go:319] OS: Linux
	I1124 13:47:42.487823  607669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:47:42.487892  607669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:47:42.487986  607669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:47:42.488057  607669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:47:42.488125  607669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:47:42.488216  607669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:47:42.488285  607669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:47:42.488352  607669 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:47:42.585565  607669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:47:42.585750  607669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:47:42.585896  607669 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 13:47:42.762435  607669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:47:42.054673  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:42.073094  608917 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 13:47:42.078208  608917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
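
The /etc/hosts update above uses a strip-and-append idiom: drop any line already tagged with the name, append the fresh mapping, and copy the temp file back with sudo so the redirection itself does not need root. A hedged sketch of the same idiom with the values from this log (the $'\t' keeps the match anchored to a tab-separated entry):

  ip="192.168.103.1"; name="host.minikube.internal"
  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
  sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
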
	I1124 13:47:42.089858  608917 kubeadm.go:884] updating cluster {Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:47:42.090126  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:47:42.090181  608917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:42.117576  608917 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1124 13:47:42.117601  608917 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 13:47:42.117671  608917 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:42.117683  608917 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.117696  608917 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.117708  608917 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.117683  608917 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.117737  608917 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.117738  608917 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.117773  608917 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.119957  608917 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.120028  608917 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.120041  608917 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.120103  608917 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.120144  608917 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.120206  608917 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.120361  608917 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:42.120651  608917 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.324599  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1124 13:47:42.324658  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.329752  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1124 13:47:42.329811  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1124 13:47:42.340410  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1124 13:47:42.340483  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.345994  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1124 13:47:42.346082  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.350632  608917 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1124 13:47:42.350771  608917 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.350861  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.354889  608917 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 13:47:42.355021  608917 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.355078  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.365506  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1124 13:47:42.365584  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.370164  608917 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1124 13:47:42.370246  608917 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.370299  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.371573  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.371569  608917 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1124 13:47:42.371633  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.371663  608917 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.371700  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.383984  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1124 13:47:42.384064  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.391339  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1124 13:47:42.391424  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.394058  608917 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1124 13:47:42.394107  608917 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.394139  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.394173  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.394139  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.410796  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.412029  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.415223  608917 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1124 13:47:42.415273  608917 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.415318  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.430558  608917 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1124 13:47:42.430610  608917 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.430661  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.432115  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.432240  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.432710  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.449068  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.451309  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.451333  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.451434  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.471426  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.471426  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.472006  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.507575  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:47:42.507696  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:42.507737  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.507752  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.507776  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:47:42.507812  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:42.512031  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:47:42.512160  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.512183  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.512220  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:47:42.512281  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:42.542249  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1124 13:47:42.542293  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1124 13:47:42.542356  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.542419  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.542436  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1124 13:47:42.542450  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 13:47:42.542460  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1124 13:47:42.542482  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 13:47:42.542522  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1124 13:47:42.542541  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1124 13:47:42.547506  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:47:42.547609  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:42.591222  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:47:42.591265  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:47:42.591339  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:42.591358  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:42.630891  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1124 13:47:42.630960  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1124 13:47:42.635881  608917 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.635984  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.696822  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1124 13:47:42.696868  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1124 13:47:42.696964  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1124 13:47:42.696987  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1124 13:47:42.855586  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 13:47:43.017613  608917 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:43.017692  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:43.363331  608917 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1124 13:47:43.363429  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:44.322473  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.304751727s)
	I1124 13:47:44.322506  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1124 13:47:44.322534  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:44.322535  608917 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 13:47:44.322572  608917 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:44.322581  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:44.322611  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:44.327186  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
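
With no preload tarball for v1.34.1 on containerd, the no-preload profile loads every image individually: check containerd's k8s.io namespace for the exact name/digest, untag any mismatch with crictl rmi, stat the cached tarball under /var/lib/minikube/images, copy it from the local cache if absent, and import it with ctr. A condensed bash sketch of that path for a single image, assuming `node` is a hypothetical SSH alias for the minikube container and with the cache path abbreviated (the real flow in cache_images.go runs these steps concurrently per image):

  img="registry.k8s.io/pause:3.10.1"
  tar="/var/lib/minikube/images/pause_3.10.1"
  cache="$HOME/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1"

  # skip images containerd already has under the k8s.io namespace
  if ! ssh node "sudo ctr -n=k8s.io images ls 'name==$img'" | grep -q "$img"; then
    ssh node "sudo crictl rmi $img" 2>/dev/null || true    # drop any stale tag first
    ssh node "stat $tar" >/dev/null 2>&1 || scp "$cache" "node:$tar"
    ssh node "sudo ctr -n=k8s.io images import $tar"
  fi
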
	I1124 13:47:42.765072  607669 out.go:252]   - Generating certificates and keys ...
	I1124 13:47:42.765189  607669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:47:42.765429  607669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:47:42.918631  607669 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:47:43.145530  607669 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:47:43.262863  607669 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:47:43.516853  607669 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:47:43.680193  607669 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:47:43.680382  607669 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-513442] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:47:43.927450  607669 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:47:43.927668  607669 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-513442] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:47:44.210866  607669 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:47:44.444469  607669 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:47:44.571652  607669 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:47:44.571791  607669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:47:44.658495  607669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:47:44.899827  607669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:47:45.259836  607669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:47:45.407067  607669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:47:45.407645  607669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:47:45.412109  607669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:47:42.868629  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:45.407011  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.084400483s)
	I1124 13:47:45.407048  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1124 13:47:45.407074  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:45.407121  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:45.407011  608917 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.079785919s)
	I1124 13:47:45.407225  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:46.754417  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.347254819s)
	I1124 13:47:46.754464  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 13:47:46.754487  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:46.754539  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:46.754423  608917 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.34716741s)
	I1124 13:47:46.754625  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:46.791381  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 13:47:46.791500  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:48.250258  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.49567347s)
	I1124 13:47:48.250293  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 13:47:48.250322  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:48.250369  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:48.250393  608917 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.458859359s)
	I1124 13:47:48.250436  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 13:47:48.250458  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 13:47:49.525346  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.274952475s)
	I1124 13:47:49.525372  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 13:47:49.525397  608917 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:49.525432  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:45.413783  607669 out.go:252]   - Booting up control plane ...
	I1124 13:47:45.414000  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:47:45.414122  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:47:45.415606  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:47:45.433197  607669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:47:45.434777  607669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:47:45.434850  607669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:47:45.555124  607669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 13:47:47.870054  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 13:47:47.870131  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:47.870207  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:47.909612  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:47:47.909637  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:47.909644  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:47.909649  572647 cri.go:89] found id: ""
	I1124 13:47:47.909660  572647 logs.go:282] 3 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:47.909721  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.915163  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.920826  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.926251  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:47.926326  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:47.968362  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:47.968399  572647 cri.go:89] found id: ""
	I1124 13:47:47.968412  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:47.968487  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.973840  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:47.973955  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:48.011120  572647 cri.go:89] found id: ""
	I1124 13:47:48.011151  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.011163  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:48.011172  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:48.011242  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:48.049409  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:48.049433  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:48.049439  572647 cri.go:89] found id: ""
	I1124 13:47:48.049449  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:48.049612  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.055041  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.061717  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:48.061795  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:48.098008  572647 cri.go:89] found id: ""
	I1124 13:47:48.098036  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.098048  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:48.098056  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:48.098116  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:48.134832  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:47:48.134858  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:48.134864  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:48.134868  572647 cri.go:89] found id: ""
	I1124 13:47:48.134879  572647 logs.go:282] 3 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:48.134960  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.140512  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.146067  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.151167  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:48.151293  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:48.194241  572647 cri.go:89] found id: ""
	I1124 13:47:48.194275  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.194287  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:48.194297  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:48.194366  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:48.235586  572647 cri.go:89] found id: ""
	I1124 13:47:48.235617  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.235629  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:48.235644  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:48.235660  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:48.322131  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:47:48.322175  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:47:48.358925  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:48.358964  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:48.399403  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:47:48.399439  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:47:48.442576  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:48.442621  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:48.490297  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:48.490336  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:48.543239  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:48.543277  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:48.591561  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:48.591604  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:48.639975  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:48.640012  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:48.703335  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:48.703393  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:48.760778  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:48.760820  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:48.887283  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:48.887328  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:48.915138  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:48.915177  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
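
Meanwhile the process logging as 572647 has an apiserver health check timing out against 192.168.76.2:8443 and falls back to gathering diagnostics: container IDs per component via crictl, then per-container logs plus containerd, kubelet, dmesg and `kubectl describe nodes`. The same collection can be reproduced by hand on the node with the commands visible above:

  # newest kube-apiserver container, if any
  cid=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
  [ -n "$cid" ] && sudo crictl logs --tail 400 "$cid"

  # runtime, kubelet and kernel logs
  sudo journalctl -u containerd -n 400 --no-pager
  sudo journalctl -u kubelet -n 400 --no-pager
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

  # cluster view via the kubeconfig minikube keeps on the node
  sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig
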
	I1124 13:47:50.557442  607669 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002632 seconds
	I1124 13:47:50.557627  607669 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:47:50.572390  607669 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:47:51.098533  607669 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:47:51.098764  607669 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-513442 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:47:51.610053  607669 kubeadm.go:319] [bootstrap-token] Using token: eki30b.4i7191y9601t9kqb
	I1124 13:47:51.611988  607669 out.go:252]   - Configuring RBAC rules ...
	I1124 13:47:51.612142  607669 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:47:51.618056  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:47:51.627751  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:47:51.631902  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:47:51.635666  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:47:51.643042  607669 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:47:51.655046  607669 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:47:51.879254  607669 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:47:52.022857  607669 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:47:52.024273  607669 kubeadm.go:319] 
	I1124 13:47:52.024439  607669 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:47:52.024451  607669 kubeadm.go:319] 
	I1124 13:47:52.024565  607669 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:47:52.024593  607669 kubeadm.go:319] 
	I1124 13:47:52.024628  607669 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:47:52.024712  607669 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:47:52.024786  607669 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:47:52.024795  607669 kubeadm.go:319] 
	I1124 13:47:52.024870  607669 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:47:52.024880  607669 kubeadm.go:319] 
	I1124 13:47:52.024984  607669 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:47:52.024995  607669 kubeadm.go:319] 
	I1124 13:47:52.025066  607669 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:47:52.025175  607669 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:47:52.025273  607669 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:47:52.025282  607669 kubeadm.go:319] 
	I1124 13:47:52.025399  607669 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:47:52.025508  607669 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:47:52.025517  607669 kubeadm.go:319] 
	I1124 13:47:52.025633  607669 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token eki30b.4i7191y9601t9kqb \
	I1124 13:47:52.025782  607669 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c \
	I1124 13:47:52.025814  607669 kubeadm.go:319] 	--control-plane 
	I1124 13:47:52.025823  607669 kubeadm.go:319] 
	I1124 13:47:52.025955  607669 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:47:52.025964  607669 kubeadm.go:319] 
	I1124 13:47:52.026081  607669 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token eki30b.4i7191y9601t9kqb \
	I1124 13:47:52.026226  607669 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c 
	I1124 13:47:52.029215  607669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:47:52.029395  607669 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:47:52.029436  607669 cni.go:84] Creating CNI manager for ""
	I1124 13:47:52.029450  607669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:52.032075  607669 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:47:52.378094  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.852631537s)
	I1124 13:47:52.378131  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 13:47:52.378164  608917 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:52.378216  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:52.826755  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 13:47:52.826808  608917 cache_images.go:125] Successfully loaded all cached images
	I1124 13:47:52.826816  608917 cache_images.go:94] duration metric: took 10.70919772s to LoadCachedImages
	I1124 13:47:52.826831  608917 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1124 13:47:52.826984  608917 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-608395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
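
The kubelet flags above are delivered as a systemd drop-in; the empty `ExecStart=` line is the usual override idiom that clears the ExecStart inherited from kubelet.service before redefining it. A hedged, hand-written equivalent of the drop-in minikube generates here (the actual file is the 322-byte 10-kubeadm.conf scp'd a few lines further down; the [Install] section stays with the base unit):

  sudo mkdir -p /etc/systemd/system/kubelet.service.d
  sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-608395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
EOF
  sudo systemctl daemon-reload
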
	I1124 13:47:52.827057  608917 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:47:52.858503  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:47:52.858531  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:52.858557  608917 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:47:52.858588  608917 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-608395 NodeName:no-preload-608395 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:47:52.858757  608917 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-608395"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
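
This is the complete kubeadm.yaml minikube renders for no-preload-608395: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single multi-document file. As the log shows a few lines below, it is written to /var/tmp/minikube/kubeadm.yaml.new (2232 bytes) and then consumed the same way the old-k8s-version profile did at 13:47:42.364; roughly (ignore-preflight list abridged here):

  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
    --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem
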
	
	I1124 13:47:52.858835  608917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:47:52.869416  608917 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 13:47:52.869483  608917 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 13:47:52.881260  608917 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 13:47:52.881274  608917 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 13:47:52.881284  608917 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 13:47:52.881370  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 13:47:52.886648  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 13:47:52.886683  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 13:47:53.829310  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:47:53.844364  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 13:47:53.848663  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 13:47:53.848703  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 13:47:54.078871  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 13:47:54.083904  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 13:47:54.083971  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
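	Each of the three transfers above follows the same check-then-copy pattern: stat the target under /var/lib/minikube/binaries/v1.34.1, and only if that stat fails, scp the binary from the local cache. A small Go sketch of that logic, using plain local file operations in place of minikube's ssh_runner (so purely illustrative, with hypothetical paths):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src into destDir only when the destination is missing,
// mirroring the "stat, then scp on failure" sequence in the log above.
// Illustrative only: the real transfer happens over SSH to the node.
func ensureBinary(src, destDir string) error {
	dest := filepath.Join(destDir, filepath.Base(src))
	if _, err := os.Stat(dest); err == nil {
		return nil // already present, skip the copy
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Hypothetical paths modeled on the cache layout seen in the log.
	if err := ensureBinary(
		"/home/jenkins/.minikube/cache/linux/amd64/v1.34.1/kubelet",
		"/var/lib/minikube/binaries/v1.34.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}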
	I1124 13:47:54.263727  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:47:54.272819  608917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 13:47:54.287533  608917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:47:54.307319  608917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1124 13:47:54.321728  608917 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:47:54.326108  608917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:54.337568  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:54.423252  608917 ssh_runner.go:195] Run: sudo systemctl start kubelet
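	The /etc/hosts edit at 13:47:54.326108 works by filtering out any existing line ending in a tab plus control-plane.minikube.internal, appending a fresh record pointing at 192.168.103.2, and copying the temporary file back over /etc/hosts. A Go sketch of the same filter-and-append step, operating on hypothetical local file names rather than /etc/hosts itself:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Rewrites an /etc/hosts-style file so it contains exactly one record for
// control-plane.minikube.internal, matching the shell one-liner in the log.
// "hosts.in"/"hosts.out" are placeholder names; the real command edits
// /etc/hosts in place via sudo.
func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("hosts.in")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line) // drop any stale control-plane record
		}
	}
	kept = append(kept, "192.168.103.2\t"+host)
	if err := os.WriteFile("hosts.out", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}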
	I1124 13:47:54.446892  608917 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395 for IP: 192.168.103.2
	I1124 13:47:54.446932  608917 certs.go:195] generating shared ca certs ...
	I1124 13:47:54.446950  608917 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.447115  608917 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:47:54.447173  608917 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:47:54.447189  608917 certs.go:257] generating profile certs ...
	I1124 13:47:54.447250  608917 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key
	I1124 13:47:54.447265  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt with IP's: []
	I1124 13:47:54.480111  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt ...
	I1124 13:47:54.480143  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: {Name:mk0373d89f453529126dca865f8c4273a9b76c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.480318  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key ...
	I1124 13:47:54.480326  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key: {Name:mkd4fd6c97a850045d4415dcd6682504ca05b6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.480412  608917 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0
	I1124 13:47:54.480432  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 13:47:54.564575  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 ...
	I1124 13:47:54.564606  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0: {Name:mk39921501aaa8b9dfdaa0c59584189fbc232834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.564812  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0 ...
	I1124 13:47:54.564832  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0: {Name:mk1e5ec23cae444088ab39a7c9f4bd7f0b68695e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.565002  608917 certs.go:382] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt
	I1124 13:47:54.565092  608917 certs.go:386] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key
	I1124 13:47:54.565147  608917 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key
	I1124 13:47:54.565166  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt with IP's: []
	I1124 13:47:54.682010  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt ...
	I1124 13:47:54.682042  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt: {Name:mk61707e6277a856c1f1cee667479489cd8cfc56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.682251  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key ...
	I1124 13:47:54.682270  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key: {Name:mkdc07f88aff1f58330c9757ac629acf2062c9ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.682520  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:47:54.682564  608917 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:47:54.682574  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:47:54.682602  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:47:54.682626  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:47:54.682651  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:47:54.682697  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:54.683371  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:47:54.703387  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:47:54.722770  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:47:54.743107  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:47:54.763697  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 13:47:54.783164  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:47:54.802752  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:47:54.822653  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:47:54.843126  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:47:54.867619  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:47:54.887814  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:47:54.907876  608917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:47:54.922379  608917 ssh_runner.go:195] Run: openssl version
	I1124 13:47:54.929636  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:47:54.940237  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.944856  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.944961  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.983788  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
	I1124 13:47:54.994031  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:47:55.004849  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.010168  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.010231  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.048930  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:47:55.058618  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:47:55.068496  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:52.033462  607669 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:47:52.040052  607669 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 13:47:52.040080  607669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:47:52.058896  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:47:52.863538  607669 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:47:52.863612  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:52.863691  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-513442 minikube.k8s.io/updated_at=2025_11_24T13_47_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=old-k8s-version-513442 minikube.k8s.io/primary=true
	I1124 13:47:52.876635  607669 ops.go:34] apiserver oom_adj: -16
	I1124 13:47:52.948231  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:53.449196  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:53.948546  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:54.448277  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:54.949098  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:55.073505  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:55.073568  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:55.110353  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
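	The openssl x509 -hash -noout / ln -fs pairs above install each CA under /etc/ssl/certs using its OpenSSL subject hash as the file name (for example b5213941.0 for minikubeCA.pem), which is the layout OpenSSL-based clients use to look up trust anchors. A Go sketch of one such step, shelling out to openssl for the hash; paths are illustrative and the real commands run with sudo over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash reproduces the "openssl x509 -hash -noout" + "ln -fs"
// pair from the log: compute the certificate's OpenSSL subject hash and
// symlink /etc/ssl/certs/<hash>.0 to it.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // "-fs": replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}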
	I1124 13:47:55.120226  608917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:47:55.124508  608917 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:47:55.124574  608917 kubeadm.go:401] StartCluster: {Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:55.124676  608917 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:47:55.124734  608917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:47:55.153610  608917 cri.go:89] found id: ""
	I1124 13:47:55.153686  608917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:47:55.163237  608917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:47:55.172281  608917 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:47:55.172352  608917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:47:55.181432  608917 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:47:55.181458  608917 kubeadm.go:158] found existing configuration files:
	
	I1124 13:47:55.181515  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:47:55.190814  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:47:55.190897  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:47:55.200577  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:47:55.210272  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:47:55.210344  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:47:55.219990  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:47:55.228828  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:47:55.228885  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:47:55.238104  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:47:55.246631  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:47:55.246745  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:47:55.255509  608917 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:47:55.316154  608917 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:47:55.376542  608917 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:47:55.448626  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:55.949156  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:56.449055  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:56.949140  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:57.448946  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:57.948732  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:58.448437  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:58.948803  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.449172  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.948946  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.001079  572647 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.085873793s)
	W1124 13:47:59.001127  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 13:47:59.001145  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:59.001163  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:00.448856  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:00.948957  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:01.448664  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:01.948985  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:02.448486  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:02.948890  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:03.448380  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:03.948515  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:04.448564  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:04.527535  607669 kubeadm.go:1114] duration metric: took 11.66399569s to wait for elevateKubeSystemPrivileges
	I1124 13:48:04.527576  607669 kubeadm.go:403] duration metric: took 22.29462596s to StartCluster
	I1124 13:48:04.527612  607669 settings.go:142] acquiring lock: {Name:mka599a3c9bae62ffb84d261186583052ce40f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:04.527702  607669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:48:04.529054  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:04.529299  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:48:04.529306  607669 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:48:04.529383  607669 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:48:04.529498  607669 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-513442"
	I1124 13:48:04.529517  607669 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:48:04.529519  607669 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-513442"
	I1124 13:48:04.529535  607669 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-513442"
	I1124 13:48:04.529561  607669 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-513442"
	I1124 13:48:04.529641  607669 host.go:66] Checking if "old-k8s-version-513442" exists ...
	I1124 13:48:04.529946  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.530180  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.531152  607669 out.go:179] * Verifying Kubernetes components...
	I1124 13:48:04.532717  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:48:04.557008  607669 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:48:04.558405  607669 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:04.558429  607669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:48:04.558495  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:48:04.562314  607669 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-513442"
	I1124 13:48:04.562381  607669 host.go:66] Checking if "old-k8s-version-513442" exists ...
	I1124 13:48:04.563175  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.584062  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:48:04.598587  607669 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:04.598613  607669 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:48:04.598683  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:48:04.628606  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:48:04.653771  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:48:04.701037  607669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:48:04.714197  607669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:04.765729  607669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:04.912320  607669 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 13:48:04.913621  607669 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-513442" to be "Ready" ...
	I1124 13:48:05.136398  607669 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:48:05.160590  608917 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:48:05.160664  608917 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:48:05.160771  608917 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:48:05.160854  608917 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:48:05.160886  608917 kubeadm.go:319] OS: Linux
	I1124 13:48:05.160993  608917 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:48:05.161038  608917 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:48:05.161128  608917 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:48:05.161215  608917 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:48:05.161290  608917 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:48:05.161348  608917 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:48:05.161407  608917 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:48:05.161478  608917 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:48:05.161607  608917 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:48:05.161758  608917 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:48:05.161894  608917 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:48:05.162009  608917 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:48:05.163691  608917 out.go:252]   - Generating certificates and keys ...
	I1124 13:48:05.163805  608917 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:48:05.163947  608917 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:48:05.164054  608917 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:48:05.164154  608917 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:48:05.164250  608917 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:48:05.164325  608917 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:48:05.164403  608917 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:48:05.164579  608917 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-608395] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:48:05.164662  608917 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:48:05.164844  608917 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-608395] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:48:05.164993  608917 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:48:05.165088  608917 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:48:05.165130  608917 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:48:05.165182  608917 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:48:05.165250  608917 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:48:05.165313  608917 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:48:05.165382  608917 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:48:05.165456  608917 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:48:05.165506  608917 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:48:05.165580  608917 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:48:05.165637  608917 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:48:05.167858  608917 out.go:252]   - Booting up control plane ...
	I1124 13:48:05.167962  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:48:05.168043  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:48:05.168104  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:48:05.168199  608917 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:48:05.168298  608917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:48:05.168436  608917 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:48:05.168514  608917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:48:05.168558  608917 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:48:05.168715  608917 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:48:05.168854  608917 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:48:05.168953  608917 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001985013s
	I1124 13:48:05.169093  608917 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:48:05.169202  608917 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1124 13:48:05.169339  608917 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:48:05.169461  608917 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:48:05.169582  608917 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.171045551s
	I1124 13:48:05.169691  608917 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.746683308s
	I1124 13:48:05.169782  608917 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002983514s
	I1124 13:48:05.169958  608917 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:48:05.170079  608917 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:48:05.170136  608917 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:48:05.170449  608917 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-608395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:48:05.170534  608917 kubeadm.go:319] [bootstrap-token] Using token: 0m3tk6.bp5t9g266aj6zg5e
	I1124 13:48:05.172344  608917 out.go:252]   - Configuring RBAC rules ...
	I1124 13:48:05.172497  608917 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:48:05.172606  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:48:05.172790  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:48:05.172947  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:48:05.173067  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:48:05.173152  608917 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:48:05.173251  608917 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:48:05.173290  608917 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:48:05.173330  608917 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:48:05.173336  608917 kubeadm.go:319] 
	I1124 13:48:05.173391  608917 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:48:05.173397  608917 kubeadm.go:319] 
	I1124 13:48:05.173470  608917 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:48:05.173476  608917 kubeadm.go:319] 
	I1124 13:48:05.173498  608917 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:48:05.173553  608917 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:48:05.173610  608917 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:48:05.173623  608917 kubeadm.go:319] 
	I1124 13:48:05.173669  608917 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:48:05.173675  608917 kubeadm.go:319] 
	I1124 13:48:05.173718  608917 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:48:05.173727  608917 kubeadm.go:319] 
	I1124 13:48:05.173778  608917 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:48:05.173858  608917 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:48:05.173981  608917 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:48:05.173990  608917 kubeadm.go:319] 
	I1124 13:48:05.174085  608917 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:48:05.174165  608917 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:48:05.174170  608917 kubeadm.go:319] 
	I1124 13:48:05.174250  608917 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0m3tk6.bp5t9g266aj6zg5e \
	I1124 13:48:05.174352  608917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c \
	I1124 13:48:05.174376  608917 kubeadm.go:319] 	--control-plane 
	I1124 13:48:05.174381  608917 kubeadm.go:319] 
	I1124 13:48:05.174459  608917 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:48:05.174465  608917 kubeadm.go:319] 
	I1124 13:48:05.174560  608917 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0m3tk6.bp5t9g266aj6zg5e \
	I1124 13:48:05.174802  608917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c 
	I1124 13:48:05.174826  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:48:05.174836  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:48:05.177484  608917 out.go:179] * Configuring CNI (Container Networking Interface) ...
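	The kubeadm output above includes a control-plane-check phase that polls the kubelet's healthz endpoint on 127.0.0.1:10248 and the kube-apiserver's livez endpoint on 192.168.103.2:8443 until each reports healthy. A loose Go sketch of that kind of poll loop (TLS verification is skipped here for brevity; kubeadm itself verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline passes,
// loosely mirroring the control-plane-check phase seen in the log above.
// InsecureSkipVerify is for illustration only.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// Endpoints taken from the kubeadm output above.
	for _, u := range []string{
		"http://127.0.0.1:10248/healthz",
		"https://192.168.103.2:8443/livez",
	} {
		fmt.Println(u, waitHealthy(u, 4*time.Minute))
	}
}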
	I1124 13:48:05.137677  607669 addons.go:530] duration metric: took 608.290782ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:48:01.553682  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:02.346718  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:51122->192.168.76.2:8443: read: connection reset by peer
	I1124 13:48:02.346797  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:02.346868  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:02.379430  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:02.379461  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:48:02.379468  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:02.379472  572647 cri.go:89] found id: ""
	I1124 13:48:02.379481  572647 logs.go:282] 3 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:02.379554  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.384666  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.389028  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.393413  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:02.393493  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:02.423298  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:02.423317  572647 cri.go:89] found id: ""
	I1124 13:48:02.423325  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:02.423377  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.428323  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:02.428396  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:02.458971  572647 cri.go:89] found id: ""
	I1124 13:48:02.459002  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.459014  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:02.459023  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:02.459136  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:02.495221  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:02.495253  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:02.495258  572647 cri.go:89] found id: ""
	I1124 13:48:02.495267  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:02.495325  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.504536  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.513709  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:02.513782  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:02.545556  572647 cri.go:89] found id: ""
	I1124 13:48:02.545589  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.545603  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:02.545613  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:02.545686  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:02.575683  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:02.575710  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:48:02.575714  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:02.575717  572647 cri.go:89] found id: ""
	I1124 13:48:02.575725  572647 logs.go:282] 3 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:02.575799  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.580340  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.584784  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.588717  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:02.588774  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:02.617522  572647 cri.go:89] found id: ""
	I1124 13:48:02.617550  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.617558  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:02.617567  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:02.617616  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:02.647375  572647 cri.go:89] found id: ""
	I1124 13:48:02.647407  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.647418  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:02.647432  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:02.647445  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:02.685850  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:02.685900  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:02.794118  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:02.794164  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:02.866960  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:02.866982  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:02.866997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:02.908627  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:48:02.908671  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:48:02.949348  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:02.949380  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:02.997498  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:02.997541  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:03.065816  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:48:03.065856  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:48:03.101360  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:03.101393  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:03.140140  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:03.140183  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:03.160020  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:03.160058  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:03.202092  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:03.202136  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:03.247020  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:03.247060  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:03.283475  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:03.283518  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
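	The log gathering above is a simple enumerate-then-tail loop: crictl ps -a --quiet --name=<component> to list container IDs, then crictl logs --tail 400 <id> for each one found. A Go sketch of that loop (run locally for illustration; minikube executes these commands over SSH with sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs lists containers for one component with crictl and prints
// the last n log lines of each, roughly following the gathering loop above.
func tailComponentLogs(name string, n int) error {
	ids, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("== %s [%s] ==\n%s", name, id, out)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		if err := tailComponentLogs(c, 400); err != nil {
			fmt.Println(c, "error:", err)
		}
	}
}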
	I1124 13:48:05.832996  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:05.833478  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:05.833543  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:05.833607  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:05.862229  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:05.862254  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:05.862258  572647 cri.go:89] found id: ""
	I1124 13:48:05.862267  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:05.862320  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.867091  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.871378  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:05.871455  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:05.900338  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:05.900361  572647 cri.go:89] found id: ""
	I1124 13:48:05.900370  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:05.900428  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.904531  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:05.904606  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:05.933536  572647 cri.go:89] found id: ""
	I1124 13:48:05.933565  572647 logs.go:282] 0 containers: []
	W1124 13:48:05.933579  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:05.933587  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:05.933645  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:05.961942  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:05.961966  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:05.961980  572647 cri.go:89] found id: ""
	I1124 13:48:05.961988  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:05.962048  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.966413  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.970560  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:05.970640  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:05.999021  572647 cri.go:89] found id: ""
	I1124 13:48:05.999046  572647 logs.go:282] 0 containers: []
	W1124 13:48:05.999057  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:05.999065  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:05.999125  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:06.030192  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:06.030216  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:06.030222  572647 cri.go:89] found id: ""
	I1124 13:48:06.030233  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:06.030291  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:06.034509  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:06.038518  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:06.038602  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:06.067432  572647 cri.go:89] found id: ""
	I1124 13:48:06.067459  572647 logs.go:282] 0 containers: []
	W1124 13:48:06.067469  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:06.067477  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:06.067557  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:06.098683  572647 cri.go:89] found id: ""
	I1124 13:48:06.098712  572647 logs.go:282] 0 containers: []
	W1124 13:48:06.098723  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:06.098736  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:06.098753  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:06.163737  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:06.163765  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:06.163783  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:05.179143  608917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:48:05.184780  608917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:48:05.184802  608917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:48:05.199547  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:48:05.451312  608917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:48:05.451481  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:05.451599  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-608395 minikube.k8s.io/updated_at=2025_11_24T13_48_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=no-preload-608395 minikube.k8s.io/primary=true
	I1124 13:48:05.479434  608917 ops.go:34] apiserver oom_adj: -16
	I1124 13:48:05.560179  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:06.061204  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:06.560802  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:07.061219  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:07.561139  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:08.061015  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:08.561034  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.061268  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.560397  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.636185  608917 kubeadm.go:1114] duration metric: took 4.184744627s to wait for elevateKubeSystemPrivileges
	I1124 13:48:09.636235  608917 kubeadm.go:403] duration metric: took 14.511667218s to StartCluster
	I1124 13:48:09.636257  608917 settings.go:142] acquiring lock: {Name:mka599a3c9bae62ffb84d261186583052ce40f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:09.636332  608917 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:48:09.637980  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:09.638233  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:48:09.638262  608917 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:48:09.638340  608917 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:48:09.638439  608917 addons.go:70] Setting storage-provisioner=true in profile "no-preload-608395"
	I1124 13:48:09.638460  608917 addons.go:239] Setting addon storage-provisioner=true in "no-preload-608395"
	I1124 13:48:09.638459  608917 addons.go:70] Setting default-storageclass=true in profile "no-preload-608395"
	I1124 13:48:09.638486  608917 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-608395"
	I1124 13:48:09.638512  608917 host.go:66] Checking if "no-preload-608395" exists ...
	I1124 13:48:09.638608  608917 config.go:182] Loaded profile config "no-preload-608395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:48:09.638889  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.639090  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.640719  608917 out.go:179] * Verifying Kubernetes components...
	I1124 13:48:09.642235  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:48:09.665980  608917 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:48:09.668239  608917 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:09.668262  608917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:48:09.668334  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:48:09.668545  608917 addons.go:239] Setting addon default-storageclass=true in "no-preload-608395"
	I1124 13:48:09.668594  608917 host.go:66] Checking if "no-preload-608395" exists ...
	I1124 13:48:09.669115  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.708052  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:48:09.711213  608917 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:09.711236  608917 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:48:09.711297  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:48:09.737250  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:48:09.745340  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:48:09.808489  608917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:48:09.832661  608917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:09.863280  608917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:09.941101  608917 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 13:48:09.942521  608917 node_ready.go:35] waiting up to 6m0s for node "no-preload-608395" to be "Ready" ...
	I1124 13:48:10.163475  608917 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:48:05.418106  607669 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-513442" context rescaled to 1 replicas
	W1124 13:48:06.917478  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:09.417409  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:06.199640  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:06.199675  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:06.235793  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:06.235827  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:06.290172  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:06.290212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:06.325935  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:06.325975  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:06.359485  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:06.359523  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:06.406787  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:06.406834  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:06.503206  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:06.503251  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:06.520877  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:06.520924  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:06.561472  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:06.561510  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:06.591722  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:06.591748  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:09.128043  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:09.128549  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:09.128609  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:09.128678  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:09.158194  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:09.158216  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:09.158220  572647 cri.go:89] found id: ""
	I1124 13:48:09.158229  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:09.158308  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.162575  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.167402  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:09.167472  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:09.196608  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:09.196633  572647 cri.go:89] found id: ""
	I1124 13:48:09.196645  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:09.196709  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.201107  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:09.201190  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:09.232265  572647 cri.go:89] found id: ""
	I1124 13:48:09.232300  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.232311  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:09.232320  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:09.232386  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:09.272990  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:09.273017  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:09.273022  572647 cri.go:89] found id: ""
	I1124 13:48:09.273033  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:09.273100  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.278614  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.283409  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:09.283485  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:09.314562  572647 cri.go:89] found id: ""
	I1124 13:48:09.314592  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.314604  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:09.314611  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:09.314682  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:09.346903  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:09.346963  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:09.346970  572647 cri.go:89] found id: ""
	I1124 13:48:09.346979  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:09.347049  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.351444  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.355601  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:09.355675  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:09.387667  572647 cri.go:89] found id: ""
	I1124 13:48:09.387697  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.387709  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:09.387716  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:09.387779  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:09.417828  572647 cri.go:89] found id: ""
	I1124 13:48:09.417854  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.417863  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:09.417876  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:09.417894  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:09.518663  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:09.518707  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:09.538049  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:09.538093  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:09.606209  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:09.606232  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:09.606246  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:09.646703  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:09.646736  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:09.708037  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:09.708078  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:09.779698  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:09.779735  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:09.819613  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:09.819663  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:09.867349  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:09.867388  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:09.917580  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:09.917620  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:09.959751  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:09.959793  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:10.006236  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:10.006274  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:10.165110  608917 addons.go:530] duration metric: took 526.764143ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:48:10.444998  608917 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-608395" context rescaled to 1 replicas
	W1124 13:48:11.948043  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:14.445721  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:11.417485  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:13.418201  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:12.563487  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:12.564031  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:12.564091  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:12.564151  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:12.598524  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:12.598553  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:12.598559  572647 cri.go:89] found id: ""
	I1124 13:48:12.598570  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:12.598654  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.603466  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.608383  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:12.608462  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:12.652395  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:12.652422  572647 cri.go:89] found id: ""
	I1124 13:48:12.652433  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:12.652503  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.657966  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:12.658060  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:12.693432  572647 cri.go:89] found id: ""
	I1124 13:48:12.693468  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.693480  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:12.693489  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:12.693558  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:12.731546  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:12.731572  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:12.731579  572647 cri.go:89] found id: ""
	I1124 13:48:12.731590  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:12.731820  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.737055  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.741859  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:12.741953  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:12.776627  572647 cri.go:89] found id: ""
	I1124 13:48:12.776652  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.776660  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:12.776667  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:12.776735  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:12.809077  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:12.809099  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:12.809102  572647 cri.go:89] found id: ""
	I1124 13:48:12.809112  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:12.809166  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.813963  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.818488  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:12.818563  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:12.852844  572647 cri.go:89] found id: ""
	I1124 13:48:12.852879  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.852891  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:12.852900  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:12.853034  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:12.889177  572647 cri.go:89] found id: ""
	I1124 13:48:12.889228  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.889240  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:12.889255  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:12.889278  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:12.941108  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:12.941146  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:13.012950  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:13.012998  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:13.059324  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:13.059367  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:13.096188  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:13.096235  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:13.157287  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:13.157338  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:13.198203  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:13.198250  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:13.219729  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:13.219773  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:13.293315  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:13.293338  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:13.293356  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:13.338975  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:13.339029  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:13.385546  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:13.385596  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:13.427130  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:13.427162  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:16.027717  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:16.028251  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:16.028310  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:16.028363  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:16.058811  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:16.058839  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:16.058847  572647 cri.go:89] found id: ""
	I1124 13:48:16.058858  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:16.058999  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.063797  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.068208  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:16.068282  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:16.097374  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:16.097404  572647 cri.go:89] found id: ""
	I1124 13:48:16.097416  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:16.097484  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.102967  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:16.103045  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:16.133626  572647 cri.go:89] found id: ""
	I1124 13:48:16.133660  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.133670  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:16.133676  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:16.133746  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:16.165392  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:16.165424  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:16.165431  572647 cri.go:89] found id: ""
	I1124 13:48:16.165442  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:16.165507  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.170277  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.174579  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:16.174661  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	W1124 13:48:16.445831  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:18.945868  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:15.917184  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:17.917526  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:19.416721  607669 node_ready.go:49] node "old-k8s-version-513442" is "Ready"
	I1124 13:48:19.416760  607669 node_ready.go:38] duration metric: took 14.503103561s for node "old-k8s-version-513442" to be "Ready" ...
	I1124 13:48:19.416778  607669 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:48:19.416833  607669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:48:19.430267  607669 api_server.go:72] duration metric: took 14.90093273s to wait for apiserver process to appear ...
	I1124 13:48:19.430299  607669 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:48:19.430326  607669 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 13:48:19.436844  607669 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 13:48:19.438582  607669 api_server.go:141] control plane version: v1.28.0
	I1124 13:48:19.438618  607669 api_server.go:131] duration metric: took 8.311152ms to wait for apiserver health ...
	I1124 13:48:19.438632  607669 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:48:19.443134  607669 system_pods.go:59] 8 kube-system pods found
	I1124 13:48:19.443191  607669 system_pods.go:61] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.443200  607669 system_pods.go:61] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.443207  607669 system_pods.go:61] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.443213  607669 system_pods.go:61] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.443219  607669 system_pods.go:61] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.443225  607669 system_pods.go:61] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.443231  607669 system_pods.go:61] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.443240  607669 system_pods.go:61] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.443248  607669 system_pods.go:74] duration metric: took 4.608559ms to wait for pod list to return data ...
	I1124 13:48:19.443260  607669 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:48:19.446125  607669 default_sa.go:45] found service account: "default"
	I1124 13:48:19.446157  607669 default_sa.go:55] duration metric: took 2.890045ms for default service account to be created ...
	I1124 13:48:19.446170  607669 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:48:19.450324  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:19.450375  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.450385  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.450394  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.450408  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.450415  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.450425  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.450434  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.450449  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.450484  607669 retry.go:31] will retry after 306.547577ms: missing components: kube-dns
	I1124 13:48:19.761785  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:19.761821  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.761828  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.761835  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.761839  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.761843  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.761846  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.761850  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.761855  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.761871  607669 retry.go:31] will retry after 263.639636ms: missing components: kube-dns
	I1124 13:48:20.030723  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:20.030764  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:20.030773  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:20.030781  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:20.030787  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:20.030794  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:20.030799  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:20.030804  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:20.030812  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:20.030836  607669 retry.go:31] will retry after 485.23875ms: missing components: kube-dns
	I1124 13:48:16.203971  572647 cri.go:89] found id: ""
	I1124 13:48:16.204004  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.204016  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:16.204025  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:16.204087  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:16.233087  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:16.233113  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:16.233119  572647 cri.go:89] found id: ""
	I1124 13:48:16.233130  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:16.233184  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.237937  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.242366  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:16.242450  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:16.273007  572647 cri.go:89] found id: ""
	I1124 13:48:16.273034  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.273043  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:16.273049  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:16.273100  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:16.302483  572647 cri.go:89] found id: ""
	I1124 13:48:16.302518  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.302537  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:16.302553  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:16.302575  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:16.360777  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:16.360817  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:16.391672  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:16.391700  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:16.490704  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:16.490743  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:16.530411  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:16.530448  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:16.567070  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:16.567107  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:16.601689  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:16.601728  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:16.646105  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:16.646143  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:16.682522  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:16.682560  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:16.699850  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:16.699887  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:16.759811  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:16.759835  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:16.759853  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:16.795013  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:16.795048  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.334057  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:19.334568  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:19.334661  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:19.334733  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:19.365714  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:19.365735  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.365739  572647 cri.go:89] found id: ""
	I1124 13:48:19.365747  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:19.365800  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.370354  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.374856  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:19.374992  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:19.405492  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:19.405519  572647 cri.go:89] found id: ""
	I1124 13:48:19.405529  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:19.405589  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.411364  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:19.411426  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:19.443360  572647 cri.go:89] found id: ""
	I1124 13:48:19.443391  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.443404  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:19.443412  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:19.443477  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:19.475298  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:19.475324  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:19.475331  572647 cri.go:89] found id: ""
	I1124 13:48:19.475341  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:19.475407  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.480369  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.484782  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:19.484863  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:19.514622  572647 cri.go:89] found id: ""
	I1124 13:48:19.514666  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.514716  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:19.514726  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:19.514807  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:19.550847  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:19.550872  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:19.550877  572647 cri.go:89] found id: ""
	I1124 13:48:19.550886  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:19.550963  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.556478  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.561320  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:19.561401  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:19.596190  572647 cri.go:89] found id: ""
	I1124 13:48:19.596226  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.596238  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:19.596247  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:19.596309  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:19.627382  572647 cri.go:89] found id: ""
	I1124 13:48:19.627413  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.627424  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:19.627436  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:19.627452  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:19.694796  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:19.694836  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:19.752858  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:19.752896  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:19.788182  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:19.788224  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:19.879216  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:19.879255  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:19.940757  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:19.940776  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:19.940790  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.979681  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:19.979726  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:20.020042  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:20.020085  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:20.064463  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:20.064499  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:20.098012  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:20.098044  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:20.132122  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:20.132157  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:20.148958  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:20.148997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:20.521094  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:20.521123  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Running
	I1124 13:48:20.521130  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:20.521133  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:20.521137  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:20.521141  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:20.521145  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:20.521148  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:20.521151  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Running
	I1124 13:48:20.521159  607669 system_pods.go:126] duration metric: took 1.074982882s to wait for k8s-apps to be running ...
	I1124 13:48:20.521166  607669 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:48:20.521215  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:48:20.535666  607669 system_svc.go:56] duration metric: took 14.486184ms WaitForService to wait for kubelet
	I1124 13:48:20.535706  607669 kubeadm.go:587] duration metric: took 16.006375183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:48:20.535732  607669 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:48:20.538619  607669 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:48:20.538646  607669 node_conditions.go:123] node cpu capacity is 8
	I1124 13:48:20.538662  607669 node_conditions.go:105] duration metric: took 2.9245ms to run NodePressure ...
	I1124 13:48:20.538676  607669 start.go:242] waiting for startup goroutines ...
	I1124 13:48:20.538683  607669 start.go:247] waiting for cluster config update ...
	I1124 13:48:20.538693  607669 start.go:256] writing updated cluster config ...
	I1124 13:48:20.539040  607669 ssh_runner.go:195] Run: rm -f paused
	I1124 13:48:20.543325  607669 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:20.547793  607669 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-b5rrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.552447  607669 pod_ready.go:94] pod "coredns-5dd5756b68-b5rrl" is "Ready"
	I1124 13:48:20.552472  607669 pod_ready.go:86] duration metric: took 4.651627ms for pod "coredns-5dd5756b68-b5rrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.556328  607669 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.561689  607669 pod_ready.go:94] pod "etcd-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.561717  607669 pod_ready.go:86] duration metric: took 5.363766ms for pod "etcd-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.564634  607669 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.569265  607669 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.569291  607669 pod_ready.go:86] duration metric: took 4.631558ms for pod "kube-apiserver-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.572304  607669 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.948397  607669 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.948423  607669 pod_ready.go:86] duration metric: took 376.095956ms for pod "kube-controller-manager-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.148648  607669 pod_ready.go:83] waiting for pod "kube-proxy-hzfcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.548255  607669 pod_ready.go:94] pod "kube-proxy-hzfcx" is "Ready"
	I1124 13:48:21.548288  607669 pod_ready.go:86] duration metric: took 399.608636ms for pod "kube-proxy-hzfcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.748744  607669 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:22.147789  607669 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-513442" is "Ready"
	I1124 13:48:22.147821  607669 pod_ready.go:86] duration metric: took 399.0528ms for pod "kube-scheduler-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:22.147833  607669 pod_ready.go:40] duration metric: took 1.604464617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:22.193883  607669 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 13:48:22.196207  607669 out.go:203] 
	W1124 13:48:22.197964  607669 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 13:48:22.199516  607669 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 13:48:22.201541  607669 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-513442" cluster and "default" namespace by default
	W1124 13:48:20.947014  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:22.948554  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	I1124 13:48:24.446130  608917 node_ready.go:49] node "no-preload-608395" is "Ready"
	I1124 13:48:24.446168  608917 node_ready.go:38] duration metric: took 14.503611427s for node "no-preload-608395" to be "Ready" ...
	I1124 13:48:24.446195  608917 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:48:24.446254  608917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:48:24.460952  608917 api_server.go:72] duration metric: took 14.82264088s to wait for apiserver process to appear ...
	I1124 13:48:24.460990  608917 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:48:24.461021  608917 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 13:48:24.466768  608917 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 13:48:24.468117  608917 api_server.go:141] control plane version: v1.34.1
	I1124 13:48:24.468151  608917 api_server.go:131] duration metric: took 7.151862ms to wait for apiserver health ...
	I1124 13:48:24.468164  608917 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:48:24.473836  608917 system_pods.go:59] 8 kube-system pods found
	I1124 13:48:24.473891  608917 system_pods.go:61] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.473901  608917 system_pods.go:61] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.473965  608917 system_pods.go:61] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.473980  608917 system_pods.go:61] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.473987  608917 system_pods.go:61] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.473995  608917 system_pods.go:61] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.474001  608917 system_pods.go:61] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.474011  608917 system_pods.go:61] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.474025  608917 system_pods.go:74] duration metric: took 5.853076ms to wait for pod list to return data ...
	I1124 13:48:24.474037  608917 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:48:24.476681  608917 default_sa.go:45] found service account: "default"
	I1124 13:48:24.476712  608917 default_sa.go:55] duration metric: took 2.661232ms for default service account to be created ...
	I1124 13:48:24.476724  608917 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:48:24.479715  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.479757  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.479765  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.479776  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.479782  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.479788  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.479793  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.479798  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.479806  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.479831  608917 retry.go:31] will retry after 257.034103ms: missing components: kube-dns
	I1124 13:48:24.740811  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.740842  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.740848  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.740854  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.740858  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.740863  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.740866  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.740869  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.740876  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.740892  608917 retry.go:31] will retry after 244.335921ms: missing components: kube-dns
	I1124 13:48:24.989021  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.989054  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.989061  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.989067  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.989072  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.989077  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.989080  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.989084  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.989089  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.989104  608917 retry.go:31] will retry after 431.238044ms: missing components: kube-dns
	I1124 13:48:22.686011  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:22.686450  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:22.686506  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:22.686563  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:22.718842  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:22.718868  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:22.718874  572647 cri.go:89] found id: ""
	I1124 13:48:22.718885  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:22.719025  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.724051  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.728627  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:22.728697  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:22.758279  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:22.758305  572647 cri.go:89] found id: ""
	I1124 13:48:22.758315  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:22.758378  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.762905  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:22.763025  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:22.796176  572647 cri.go:89] found id: ""
	I1124 13:48:22.796207  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.796218  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:22.796227  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:22.796293  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:22.828770  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:22.828801  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:22.828815  572647 cri.go:89] found id: ""
	I1124 13:48:22.828827  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:22.828886  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.833530  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.837668  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:22.837750  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:22.867760  572647 cri.go:89] found id: ""
	I1124 13:48:22.867793  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.867806  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:22.867815  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:22.867976  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:22.899275  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:22.899305  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:22.899312  572647 cri.go:89] found id: ""
	I1124 13:48:22.899327  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:22.899391  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.903859  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.908121  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:22.908190  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:22.938883  572647 cri.go:89] found id: ""
	I1124 13:48:22.938961  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.938972  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:22.938980  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:22.939033  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:22.969840  572647 cri.go:89] found id: ""
	I1124 13:48:22.969864  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.969872  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:22.969882  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:22.969903  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:23.031386  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:23.031411  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:23.031425  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:23.067770  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:23.067805  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:23.104851  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:23.104886  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:23.160621  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:23.160668  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:23.190994  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:23.191026  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:23.226509  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:23.226542  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:23.269082  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:23.269130  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:23.360572  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:23.360613  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:23.399049  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:23.399089  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:23.440241  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:23.440282  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:23.474172  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:23.474212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:25.992569  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:25.993167  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:25.993241  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:25.993310  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:26.021789  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:26.021816  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:26.021823  572647 cri.go:89] found id: ""
	I1124 13:48:26.021834  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:26.021985  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.027084  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.031267  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:26.031350  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:26.063349  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:26.063379  572647 cri.go:89] found id: ""
	I1124 13:48:26.063390  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:26.063448  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.068064  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:26.068140  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:26.096106  572647 cri.go:89] found id: ""
	I1124 13:48:26.096148  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.096158  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:26.096165  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:26.096220  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:26.126156  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:26.126186  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:26.126193  572647 cri.go:89] found id: ""
	I1124 13:48:26.126205  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:26.126275  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.131369  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.135595  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:26.135657  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:26.163133  572647 cri.go:89] found id: ""
	I1124 13:48:26.163161  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.163169  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:26.163187  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:26.163244  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:26.192355  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:26.192378  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:26.192384  572647 cri.go:89] found id: ""
	I1124 13:48:26.192394  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:26.192549  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.197316  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:25.424597  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:25.424631  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:25.424636  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:25.424642  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:25.424646  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:25.424650  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:25.424653  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:25.424656  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:25.424663  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:25.424679  608917 retry.go:31] will retry after 458.014987ms: missing components: kube-dns
	I1124 13:48:25.886603  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:25.886633  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Running
	I1124 13:48:25.886641  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:25.886644  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:25.886649  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:25.886653  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:25.886657  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:25.886660  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:25.886663  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Running
	I1124 13:48:25.886671  608917 system_pods.go:126] duration metric: took 1.409940532s to wait for k8s-apps to be running ...
	I1124 13:48:25.886680  608917 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:48:25.886726  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:48:25.901294  608917 system_svc.go:56] duration metric: took 14.604723ms WaitForService to wait for kubelet
	I1124 13:48:25.901324  608917 kubeadm.go:587] duration metric: took 16.26302303s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:48:25.901343  608917 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:48:25.904190  608917 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:48:25.904219  608917 node_conditions.go:123] node cpu capacity is 8
	I1124 13:48:25.904234  608917 node_conditions.go:105] duration metric: took 2.88688ms to run NodePressure ...
	I1124 13:48:25.904249  608917 start.go:242] waiting for startup goroutines ...
	I1124 13:48:25.904256  608917 start.go:247] waiting for cluster config update ...
	I1124 13:48:25.904266  608917 start.go:256] writing updated cluster config ...
	I1124 13:48:25.904560  608917 ssh_runner.go:195] Run: rm -f paused
	I1124 13:48:25.909215  608917 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:25.912986  608917 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rcf8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.917301  608917 pod_ready.go:94] pod "coredns-66bc5c9577-rcf8v" is "Ready"
	I1124 13:48:25.917324  608917 pod_ready.go:86] duration metric: took 4.297309ms for pod "coredns-66bc5c9577-rcf8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.919442  608917 pod_ready.go:83] waiting for pod "etcd-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.923976  608917 pod_ready.go:94] pod "etcd-no-preload-608395" is "Ready"
	I1124 13:48:25.923999  608917 pod_ready.go:86] duration metric: took 4.535115ms for pod "etcd-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.926003  608917 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.930385  608917 pod_ready.go:94] pod "kube-apiserver-no-preload-608395" is "Ready"
	I1124 13:48:25.930413  608917 pod_ready.go:86] duration metric: took 4.382406ms for pod "kube-apiserver-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.932261  608917 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.313581  608917 pod_ready.go:94] pod "kube-controller-manager-no-preload-608395" is "Ready"
	I1124 13:48:26.313615  608917 pod_ready.go:86] duration metric: took 381.333887ms for pod "kube-controller-manager-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.514064  608917 pod_ready.go:83] waiting for pod "kube-proxy-5vj5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.913664  608917 pod_ready.go:94] pod "kube-proxy-5vj5p" is "Ready"
	I1124 13:48:26.913702  608917 pod_ready.go:86] duration metric: took 399.60223ms for pod "kube-proxy-5vj5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.114488  608917 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.514056  608917 pod_ready.go:94] pod "kube-scheduler-no-preload-608395" is "Ready"
	I1124 13:48:27.514084  608917 pod_ready.go:86] duration metric: took 399.56934ms for pod "kube-scheduler-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.514098  608917 pod_ready.go:40] duration metric: took 1.604847792s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:27.561310  608917 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 13:48:27.563544  608917 out.go:179] * Done! kubectl is now configured to use "no-preload-608395" cluster and "default" namespace by default
	I1124 13:48:26.202352  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:26.202439  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:26.231899  572647 cri.go:89] found id: ""
	I1124 13:48:26.231953  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.231964  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:26.231973  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:26.232040  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:26.263417  572647 cri.go:89] found id: ""
	I1124 13:48:26.263446  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.263459  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:26.263473  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:26.263488  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:26.354230  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:26.354265  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:26.389608  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:26.389652  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:26.427040  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:26.427077  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:26.466568  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:26.466603  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:26.503710  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:26.503749  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:26.539150  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:26.539193  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:26.583782  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:26.583825  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:26.617656  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:26.617696  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:26.634777  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:26.634809  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:26.693534  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:26.693559  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:26.693577  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:26.748627  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:26.748668  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.280171  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:29.280640  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:29.280694  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:29.280748  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:29.309613  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:29.309638  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:29.309644  572647 cri.go:89] found id: ""
	I1124 13:48:29.309660  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:29.309730  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.314623  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.319864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:29.319962  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:29.348671  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:29.348699  572647 cri.go:89] found id: ""
	I1124 13:48:29.348709  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:29.348775  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.353662  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:29.353728  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:29.383017  572647 cri.go:89] found id: ""
	I1124 13:48:29.383046  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.383058  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:29.383066  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:29.383121  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:29.411238  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:29.411259  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:29.411264  572647 cri.go:89] found id: ""
	I1124 13:48:29.411271  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:29.411325  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.415976  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.420189  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:29.420264  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:29.449856  572647 cri.go:89] found id: ""
	I1124 13:48:29.449890  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.449921  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:29.449929  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:29.450001  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:29.480136  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.480164  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:29.480171  572647 cri.go:89] found id: ""
	I1124 13:48:29.480181  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:29.480258  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.484998  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.489433  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:29.489504  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:29.519804  572647 cri.go:89] found id: ""
	I1124 13:48:29.519841  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.519854  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:29.519864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:29.520048  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:29.549935  572647 cri.go:89] found id: ""
	I1124 13:48:29.549964  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.549974  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:29.549986  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:29.549997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:29.593521  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:29.593560  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:29.681751  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:29.681792  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:29.699198  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:29.699232  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:29.759823  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:29.759850  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:29.759863  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:29.798497  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:29.798534  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:29.835677  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:29.835718  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.864876  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:29.864923  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:29.898153  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:29.898186  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:29.932035  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:29.932073  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:29.971224  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:29.971258  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:30.026576  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:30.026619  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:32.561313  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:32.561791  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:32.561844  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:32.561894  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:32.598025  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:32.598050  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:32.598056  572647 cri.go:89] found id: ""
	I1124 13:48:32.598068  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:32.598133  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.602725  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.607141  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:32.607216  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:32.640836  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:32.640865  572647 cri.go:89] found id: ""
	I1124 13:48:32.640875  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:32.640954  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.646056  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:32.646126  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:32.674729  572647 cri.go:89] found id: ""
	I1124 13:48:32.674762  572647 logs.go:282] 0 containers: []
	W1124 13:48:32.674774  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:32.674782  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:32.674838  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:32.704017  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:32.704038  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:32.704042  572647 cri.go:89] found id: ""
	I1124 13:48:32.704051  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:32.704116  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.708425  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.712411  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:32.712479  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:32.740588  572647 cri.go:89] found id: ""
	I1124 13:48:32.740618  572647 logs.go:282] 0 containers: []
	W1124 13:48:32.740630  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:32.740638  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:32.740694  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:32.771592  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:32.771619  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:32.771624  572647 cri.go:89] found id: ""
	I1124 13:48:32.771632  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:32.771695  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.776594  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.781774  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:32.781857  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:32.821617  572647 cri.go:89] found id: ""
	I1124 13:48:32.821644  572647 logs.go:282] 0 containers: []
	W1124 13:48:32.821654  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:32.821662  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:32.821727  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:32.853528  572647 cri.go:89] found id: ""
	I1124 13:48:32.853552  572647 logs.go:282] 0 containers: []
	W1124 13:48:32.853560  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:32.853571  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:32.853587  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:32.894116  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:32.894152  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:32.928183  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:32.928225  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:32.963902  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:32.963954  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:33.080028  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:33.080059  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:33.151516  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:33.151543  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:33.151560  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:33.190611  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:33.190648  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:33.230177  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:33.230211  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:33.264707  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:33.264740  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:33.313312  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:33.313352  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:33.332374  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:33.332404  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:33.374521  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:33.374570  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:35.931383  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:35.932010  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:35.932066  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:35.932129  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:35.963379  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:35.963406  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:35.963411  572647 cri.go:89] found id: ""
	I1124 13:48:35.963421  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:35.963545  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:35.968069  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:35.972536  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:35.972616  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:36.003944  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:36.003968  572647 cri.go:89] found id: ""
	I1124 13:48:36.003977  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:36.004038  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:36.009309  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:36.009386  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:36.041126  572647 cri.go:89] found id: ""
	I1124 13:48:36.041174  572647 logs.go:282] 0 containers: []
	W1124 13:48:36.041185  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:36.041193  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:36.041318  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:36.072529  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:36.072546  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:36.072550  572647 cri.go:89] found id: ""
	I1124 13:48:36.072558  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:36.072610  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:36.077016  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:36.081328  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:36.081405  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:36.113279  572647 cri.go:89] found id: ""
	I1124 13:48:36.113310  572647 logs.go:282] 0 containers: []
	W1124 13:48:36.113322  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:36.113330  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:36.113390  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:36.146515  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:36.146542  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:36.146546  572647 cri.go:89] found id: ""
	I1124 13:48:36.146554  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:36.146614  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:36.151049  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:36.155578  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:36.155658  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:36.186139  572647 cri.go:89] found id: ""
	I1124 13:48:36.186164  572647 logs.go:282] 0 containers: []
	W1124 13:48:36.186175  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:36.186192  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:36.186260  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
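	For reference, the probe loop above can be reproduced by hand from a shell on the node. This is a minimal sketch, assuming shell access (e.g. via minikube ssh), that crictl is on the PATH, and using the apiserver address shown in the log; <container-id> is a placeholder for one of the IDs listed above:
	
	  # apiserver health probe (same endpoint the log polls)
	  curl -k https://192.168.76.2:8443/healthz
	
	  # list containers for one control-plane component by name filter
	  sudo crictl ps -a --quiet --name=kube-apiserver
	
	  # tail the last 400 log lines of a specific container
	  sudo crictl logs --tail 400 <container-id>
	
	  # kubelet and containerd unit logs
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u containerd -n 400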
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a87ce53f9a53a       56cc512116c8f       7 seconds ago       Running             busybox                   0                   abf634e42c234       busybox                                     default
	bf18342d6713e       52546a367cc9e       13 seconds ago      Running             coredns                   0                   6d8fde1010af0       coredns-66bc5c9577-rcf8v                    kube-system
	8507f470f3a86       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   38a319be0b79a       storage-provisioner                         kube-system
	2ea97fe407516       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   85152a5b82a56       kindnet-zqlgn                               kube-system
	9ddb50f35d3b7       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   91198ed5eb4e3       kube-proxy-5vj5p                            kube-system
	f1e57ae5fc13d       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   85dfcbe134545       kube-scheduler-no-preload-608395            kube-system
	e0125ce665aa9       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   f701193b00cde       kube-controller-manager-no-preload-608395   kube-system
	d82cad123b411       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   0000dcbeea4e5       kube-apiserver-no-preload-608395            kube-system
	dc4089699d63b       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   b817a80ccfbeb       etcd-no-preload-608395                      kube-system
	
	
	==> containerd <==
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.510060828Z" level=info msg="CreateContainer within sandbox \"38a319be0b79ad5175957c7dc1e582e7edb89c9e37f58b06f9f0994f04874bc8\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"8507f470f3a86296f0a16f1905dfcfc9305722b3ab50ce2e78a13d8f8acddb22\""
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.510624936Z" level=info msg="StartContainer for \"8507f470f3a86296f0a16f1905dfcfc9305722b3ab50ce2e78a13d8f8acddb22\""
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.511676866Z" level=info msg="connecting to shim 8507f470f3a86296f0a16f1905dfcfc9305722b3ab50ce2e78a13d8f8acddb22" address="unix:///run/containerd/s/143ca10fd90c5cb4c30fdb00eed55a198510d11174be676001637e238c916be7" protocol=ttrpc version=3
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.517822696Z" level=info msg="Container bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.527617577Z" level=info msg="CreateContainer within sandbox \"6d8fde1010af0dbd838e4fd22a1362c81137d2db72e7d0d908443a54202b5c9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3\""
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.528169702Z" level=info msg="StartContainer for \"bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3\""
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.529084275Z" level=info msg="connecting to shim bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3" address="unix:///run/containerd/s/f028d04a185d6c9abe51092264b3e9e3162f4ccb61a33ad1b0cea00c1641b6e7" protocol=ttrpc version=3
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.577131132Z" level=info msg="StartContainer for \"8507f470f3a86296f0a16f1905dfcfc9305722b3ab50ce2e78a13d8f8acddb22\" returns successfully"
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.580306824Z" level=info msg="StartContainer for \"bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3\" returns successfully"
	Nov 24 13:48:28 no-preload-608395 containerd[663]: time="2025-11-24T13:48:28.016567907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e09b20ec-b541-4478-9c67-c55b56ae8991,Namespace:default,Attempt:0,}"
	Nov 24 13:48:28 no-preload-608395 containerd[663]: time="2025-11-24T13:48:28.064047464Z" level=info msg="connecting to shim abf634e42c2348d9a3ac22d10e9756399d18ae0c0881e113e0b4034d8a76cb69" address="unix:///run/containerd/s/14d2a9716e984eb84752432f4df0d00c8f88a0426d6c135abeced2b7e10bbbaa" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 13:48:28 no-preload-608395 containerd[663]: time="2025-11-24T13:48:28.143114394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e09b20ec-b541-4478-9c67-c55b56ae8991,Namespace:default,Attempt:0,} returns sandbox id \"abf634e42c2348d9a3ac22d10e9756399d18ae0c0881e113e0b4034d8a76cb69\""
	Nov 24 13:48:28 no-preload-608395 containerd[663]: time="2025-11-24T13:48:28.145079667Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.302216067Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.303276199Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.304933339Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.307230621Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.307725020Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.162597076s"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.307769131Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.312074703Z" level=info msg="CreateContainer within sandbox \"abf634e42c2348d9a3ac22d10e9756399d18ae0c0881e113e0b4034d8a76cb69\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.321536374Z" level=info msg="Container a87ce53f9a53a7e121b33fc1ab6bcf6a0671080a167fc5db54f42daa27b3b54e: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.328173630Z" level=info msg="CreateContainer within sandbox \"abf634e42c2348d9a3ac22d10e9756399d18ae0c0881e113e0b4034d8a76cb69\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"a87ce53f9a53a7e121b33fc1ab6bcf6a0671080a167fc5db54f42daa27b3b54e\""
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.329029778Z" level=info msg="StartContainer for \"a87ce53f9a53a7e121b33fc1ab6bcf6a0671080a167fc5db54f42daa27b3b54e\""
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.329901108Z" level=info msg="connecting to shim a87ce53f9a53a7e121b33fc1ab6bcf6a0671080a167fc5db54f42daa27b3b54e" address="unix:///run/containerd/s/14d2a9716e984eb84752432f4df0d00c8f88a0426d6c135abeced2b7e10bbbaa" protocol=ttrpc version=3
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.393866048Z" level=info msg="StartContainer for \"a87ce53f9a53a7e121b33fc1ab6bcf6a0671080a167fc5db54f42daa27b3b54e\" returns successfully"
	
	
	==> coredns [bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33889 - 60274 "HINFO IN 308682473451809031.9053382920724870437. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.018878381s
	
	
	==> describe nodes <==
	Name:               no-preload-608395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-608395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=no-preload-608395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_48_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:48:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-608395
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:48:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:48:35 +0000   Mon, 24 Nov 2025 13:48:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:48:35 +0000   Mon, 24 Nov 2025 13:48:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:48:35 +0000   Mon, 24 Nov 2025 13:48:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:48:35 +0000   Mon, 24 Nov 2025 13:48:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-608395
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                320731f7-0f66-4c7b-bb73-4a2704cad18d
	  Boot ID:                    715d4626-373f-499b-b5de-b6d832ce4fe4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-rcf8v                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-no-preload-608395                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-zqlgn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-608395             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-608395    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-5vj5p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-608395             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node no-preload-608395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node no-preload-608395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node no-preload-608395 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node no-preload-608395 event: Registered Node no-preload-608395 in Controller
	  Normal  NodeReady                14s   kubelet          Node no-preload-608395 status is now: NodeReady
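	The node description above is ordinary kubectl describe output; a sketch of equivalent invocations, assuming the kubeconfig context matches the profile name and that the on-node binaries path matches the gathering step earlier in the log (both are assumptions, not confirmed here):
	
	  # from the test host, via the kubeconfig context
	  kubectl --context no-preload-608395 describe node no-preload-608395
	
	  # on the node itself, as the log-gathering step does
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig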
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[Nov24 12:45] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a fb 84 7d 9e 9e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[ +25.292047] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[  +0.024207] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 8e 71 0b 76 c3 08 06
	[ +16.768103] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	[  +5.950770] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[Nov24 12:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 8b d0 4a da 7e 08 06
	[  +0.000557] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[  +1.903671] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 1f e8 fc 59 74 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[ +17.535584] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 31 ec 7c 1d 38 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	
	
	==> etcd [dc4089699d63b1ebefa2ca4daebfcf11cd7227a50a1e6e1b2289c4b80616887b] <==
	{"level":"warn","ts":"2025-11-24T13:48:00.880768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.888650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.897590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.907577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.914266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.921173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.934065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.940316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.953688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.960197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.967051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.974729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.988889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:01.013686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:01.021343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:01.028137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:01.079446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37244","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:48:02.624508Z","caller":"traceutil/trace.go:172","msg":"trace[1781641608] linearizableReadLoop","detail":"{readStateIndex:72; appliedIndex:72; }","duration":"110.051169ms","start":"2025-11-24T13:48:02.514411Z","end":"2025-11-24T13:48:02.624462Z","steps":["trace[1781641608] 'read index received'  (duration: 110.044712ms)","trace[1781641608] 'applied index is now lower than readState.Index'  (duration: 5.647µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:48:02.673029Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.028533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T13:48:02.673105Z","caller":"traceutil/trace.go:172","msg":"trace[707023611] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:69; }","duration":"137.102091ms","start":"2025-11-24T13:48:02.535985Z","end":"2025-11-24T13:48:02.673087Z","steps":["trace[707023611] 'agreement among raft nodes before linearized reading'  (duration: 137.004371ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:48:02.673142Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.494312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T13:48:02.673176Z","caller":"traceutil/trace.go:172","msg":"trace[152867856] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:69; }","duration":"129.535239ms","start":"2025-11-24T13:48:02.543628Z","end":"2025-11-24T13:48:02.673163Z","steps":["trace[152867856] 'agreement among raft nodes before linearized reading'  (duration: 129.454766ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:48:02.672887Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.459223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T13:48:02.673288Z","caller":"traceutil/trace.go:172","msg":"trace[1228496030] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:0; response_revision:68; }","duration":"158.883515ms","start":"2025-11-24T13:48:02.514391Z","end":"2025-11-24T13:48:02.673274Z","steps":["trace[1228496030] 'agreement among raft nodes before linearized reading'  (duration: 110.197209ms)","trace[1228496030] 'range keys from in-memory index tree'  (duration: 48.211849ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:48:02.673056Z","caller":"traceutil/trace.go:172","msg":"trace[717417019] transaction","detail":"{read_only:false; response_revision:69; number_of_response:1; }","duration":"159.871089ms","start":"2025-11-24T13:48:02.513138Z","end":"2025-11-24T13:48:02.673009Z","steps":["trace[717417019] 'process raft request'  (duration: 111.381018ms)","trace[717417019] 'compare'  (duration: 48.311068ms)"],"step_count":2}
	
	
	==> kernel <==
	 13:48:38 up  2:30,  0 user,  load average: 2.03, 2.80, 1.92
	Linux no-preload-608395 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ea97fe407516fa684fa4c2e7ad02af95ea220afac279014e4b4e3fe4dff2140] <==
	I1124 13:48:13.811405       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:48:13.811705       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 13:48:13.811879       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:48:13.811899       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:48:13.811974       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:48:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:48:14.016296       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:48:14.108095       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:48:14.207539       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:48:14.207904       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:48:14.608274       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:48:14.608309       1 metrics.go:72] Registering metrics
	I1124 13:48:14.608385       1 controller.go:711] "Syncing nftables rules"
	I1124 13:48:24.023180       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:48:24.023253       1 main.go:301] handling current node
	I1124 13:48:34.017224       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:48:34.017265       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d82cad123b4115bcd48ca1660a95b3679527efeba0bced6899fbfd61163285fe] <==
	I1124 13:48:01.533803       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:48:01.534806       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 13:48:01.539654       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:48:01.548173       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:48:01.548340       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:48:01.561341       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:48:01.562153       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:48:02.489855       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:48:02.674429       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:48:02.674534       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:48:03.220273       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:48:03.262189       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:48:03.341712       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:48:03.348882       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 13:48:03.350044       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:48:03.354460       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:48:03.475714       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:48:04.567992       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:48:04.589259       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:48:04.601283       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:48:09.228819       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:48:09.278836       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:48:09.430563       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:48:09.435571       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 13:48:36.837064       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:55596: use of closed network connection
	
	
	==> kube-controller-manager [e0125ce665aa93a74314d6f23ea2fab5491134c5aacd08baba2eb4d66c850e3c] <==
	I1124 13:48:08.442655       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:48:08.449990       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 13:48:08.457410       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 13:48:08.473049       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 13:48:08.473106       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:48:08.473123       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 13:48:08.473131       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 13:48:08.473696       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:48:08.474102       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 13:48:08.474184       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 13:48:08.474294       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-608395"
	I1124 13:48:08.474342       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 13:48:08.474589       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 13:48:08.475058       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:48:08.475159       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:48:08.475215       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 13:48:08.475226       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 13:48:08.475443       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 13:48:08.475540       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:48:08.475941       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 13:48:08.475969       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:48:08.475996       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:48:08.481156       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:48:08.504046       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:48:28.478220       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9ddb50f35d3b70a8df49aa4b5877775ec4126034cc94e6932e87b579184a5c1e] <==
	I1124 13:48:10.412732       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:48:10.487102       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:48:10.588152       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:48:10.588196       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 13:48:10.588320       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:48:10.611310       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:48:10.611377       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:48:10.617651       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:48:10.618063       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:48:10.618091       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:48:10.619529       1 config.go:200] "Starting service config controller"
	I1124 13:48:10.619571       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:48:10.619634       1 config.go:309] "Starting node config controller"
	I1124 13:48:10.619944       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:48:10.620046       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:48:10.620078       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:48:10.619618       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:48:10.620120       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:48:10.719772       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:48:10.720304       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:48:10.720333       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:48:10.720355       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f1e57ae5fc13de600be37e1d97249746f65ecb876d4354e85073ed623a64ef5c] <==
	E1124 13:48:01.491129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:48:01.491188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:48:01.491203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:48:01.491258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:48:01.491260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:48:01.491367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:48:02.309331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:48:02.355280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:48:02.452183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:48:02.607841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:48:02.628272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:48:02.679178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:48:02.679824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:48:02.713000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:48:02.745011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:48:02.807930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:48:02.855374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:48:02.901084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:48:02.908158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:48:02.953400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:48:02.976892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:48:03.018088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:48:03.027582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:48:03.033893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1124 13:48:04.884430       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315253    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55zsb\" (UniqueName: \"kubernetes.io/projected/2e67d44e-9eb4-4bb7-a087-a76def391cbb-kube-api-access-55zsb\") pod \"kube-proxy-5vj5p\" (UID: \"2e67d44e-9eb4-4bb7-a087-a76def391cbb\") " pod="kube-system/kube-proxy-5vj5p"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315312    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc580d4e-c35b-4def-94d4-43697fee08ef-xtables-lock\") pod \"kindnet-zqlgn\" (UID: \"dc580d4e-c35b-4def-94d4-43697fee08ef\") " pod="kube-system/kindnet-zqlgn"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315333    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc580d4e-c35b-4def-94d4-43697fee08ef-lib-modules\") pod \"kindnet-zqlgn\" (UID: \"dc580d4e-c35b-4def-94d4-43697fee08ef\") " pod="kube-system/kindnet-zqlgn"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315358    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcz6\" (UniqueName: \"kubernetes.io/projected/dc580d4e-c35b-4def-94d4-43697fee08ef-kube-api-access-jfcz6\") pod \"kindnet-zqlgn\" (UID: \"dc580d4e-c35b-4def-94d4-43697fee08ef\") " pod="kube-system/kindnet-zqlgn"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315383    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dc580d4e-c35b-4def-94d4-43697fee08ef-cni-cfg\") pod \"kindnet-zqlgn\" (UID: \"dc580d4e-c35b-4def-94d4-43697fee08ef\") " pod="kube-system/kindnet-zqlgn"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315404    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e67d44e-9eb4-4bb7-a087-a76def391cbb-lib-modules\") pod \"kube-proxy-5vj5p\" (UID: \"2e67d44e-9eb4-4bb7-a087-a76def391cbb\") " pod="kube-system/kube-proxy-5vj5p"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315461    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e67d44e-9eb4-4bb7-a087-a76def391cbb-kube-proxy\") pod \"kube-proxy-5vj5p\" (UID: \"2e67d44e-9eb4-4bb7-a087-a76def391cbb\") " pod="kube-system/kube-proxy-5vj5p"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315515    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e67d44e-9eb4-4bb7-a087-a76def391cbb-xtables-lock\") pod \"kube-proxy-5vj5p\" (UID: \"2e67d44e-9eb4-4bb7-a087-a76def391cbb\") " pod="kube-system/kube-proxy-5vj5p"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423403    2128 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423447    2128 projected.go:196] Error preparing data for projected volume kube-api-access-jfcz6 for pod kube-system/kindnet-zqlgn: configmap "kube-root-ca.crt" not found
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423403    2128 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423530    2128 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dc580d4e-c35b-4def-94d4-43697fee08ef-kube-api-access-jfcz6 podName:dc580d4e-c35b-4def-94d4-43697fee08ef nodeName:}" failed. No retries permitted until 2025-11-24 13:48:09.923496635 +0000 UTC m=+5.608589954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jfcz6" (UniqueName: "kubernetes.io/projected/dc580d4e-c35b-4def-94d4-43697fee08ef-kube-api-access-jfcz6") pod "kindnet-zqlgn" (UID: "dc580d4e-c35b-4def-94d4-43697fee08ef") : configmap "kube-root-ca.crt" not found
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423539    2128 projected.go:196] Error preparing data for projected volume kube-api-access-55zsb for pod kube-system/kube-proxy-5vj5p: configmap "kube-root-ca.crt" not found
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423599    2128 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e67d44e-9eb4-4bb7-a087-a76def391cbb-kube-api-access-55zsb podName:2e67d44e-9eb4-4bb7-a087-a76def391cbb nodeName:}" failed. No retries permitted until 2025-11-24 13:48:09.923579676 +0000 UTC m=+5.608672986 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-55zsb" (UniqueName: "kubernetes.io/projected/2e67d44e-9eb4-4bb7-a087-a76def391cbb-kube-api-access-55zsb") pod "kube-proxy-5vj5p" (UID: "2e67d44e-9eb4-4bb7-a087-a76def391cbb") : configmap "kube-root-ca.crt" not found
	Nov 24 13:48:10 no-preload-608395 kubelet[2128]: I1124 13:48:10.458684    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5vj5p" podStartSLOduration=1.458660564 podStartE2EDuration="1.458660564s" podCreationTimestamp="2025-11-24 13:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:10.45866162 +0000 UTC m=+6.143754938" watchObservedRunningTime="2025-11-24 13:48:10.458660564 +0000 UTC m=+6.143753882"
	Nov 24 13:48:14 no-preload-608395 kubelet[2128]: I1124 13:48:14.470969    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zqlgn" podStartSLOduration=2.500270355 podStartE2EDuration="5.470902852s" podCreationTimestamp="2025-11-24 13:48:09 +0000 UTC" firstStartedPulling="2025-11-24 13:48:10.528340574 +0000 UTC m=+6.213433877" lastFinishedPulling="2025-11-24 13:48:13.498973073 +0000 UTC m=+9.184066374" observedRunningTime="2025-11-24 13:48:14.4593351 +0000 UTC m=+10.144428418" watchObservedRunningTime="2025-11-24 13:48:14.470902852 +0000 UTC m=+10.155996169"
	Nov 24 13:48:24 no-preload-608395 kubelet[2128]: I1124 13:48:24.041807    2128 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 13:48:24 no-preload-608395 kubelet[2128]: I1124 13:48:24.107895    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb7xn\" (UniqueName: \"kubernetes.io/projected/c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa-kube-api-access-rb7xn\") pod \"storage-provisioner\" (UID: \"c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa\") " pod="kube-system/storage-provisioner"
	Nov 24 13:48:24 no-preload-608395 kubelet[2128]: I1124 13:48:24.107983    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a909252f-b923-46e8-acff-b0d0943c4a29-config-volume\") pod \"coredns-66bc5c9577-rcf8v\" (UID: \"a909252f-b923-46e8-acff-b0d0943c4a29\") " pod="kube-system/coredns-66bc5c9577-rcf8v"
	Nov 24 13:48:24 no-preload-608395 kubelet[2128]: I1124 13:48:24.108001    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqnm6\" (UniqueName: \"kubernetes.io/projected/a909252f-b923-46e8-acff-b0d0943c4a29-kube-api-access-qqnm6\") pod \"coredns-66bc5c9577-rcf8v\" (UID: \"a909252f-b923-46e8-acff-b0d0943c4a29\") " pod="kube-system/coredns-66bc5c9577-rcf8v"
	Nov 24 13:48:24 no-preload-608395 kubelet[2128]: I1124 13:48:24.108026    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa-tmp\") pod \"storage-provisioner\" (UID: \"c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa\") " pod="kube-system/storage-provisioner"
	Nov 24 13:48:25 no-preload-608395 kubelet[2128]: I1124 13:48:25.487014    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rcf8v" podStartSLOduration=16.48687978 podStartE2EDuration="16.48687978s" podCreationTimestamp="2025-11-24 13:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:25.486827527 +0000 UTC m=+21.171920848" watchObservedRunningTime="2025-11-24 13:48:25.48687978 +0000 UTC m=+21.171973101"
	Nov 24 13:48:27 no-preload-608395 kubelet[2128]: I1124 13:48:27.701742    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.701716111 podStartE2EDuration="17.701716111s" podCreationTimestamp="2025-11-24 13:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:25.512975581 +0000 UTC m=+21.198068913" watchObservedRunningTime="2025-11-24 13:48:27.701716111 +0000 UTC m=+23.386809429"
	Nov 24 13:48:27 no-preload-608395 kubelet[2128]: I1124 13:48:27.731241    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8xzb\" (UniqueName: \"kubernetes.io/projected/e09b20ec-b541-4478-9c67-c55b56ae8991-kube-api-access-p8xzb\") pod \"busybox\" (UID: \"e09b20ec-b541-4478-9c67-c55b56ae8991\") " pod="default/busybox"
	Nov 24 13:48:30 no-preload-608395 kubelet[2128]: I1124 13:48:30.499489    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.335491178 podStartE2EDuration="3.499466503s" podCreationTimestamp="2025-11-24 13:48:27 +0000 UTC" firstStartedPulling="2025-11-24 13:48:28.144692632 +0000 UTC m=+23.829785929" lastFinishedPulling="2025-11-24 13:48:30.308667942 +0000 UTC m=+25.993761254" observedRunningTime="2025-11-24 13:48:30.49935399 +0000 UTC m=+26.184447308" watchObservedRunningTime="2025-11-24 13:48:30.499466503 +0000 UTC m=+26.184559821"
	
	
	==> storage-provisioner [8507f470f3a86296f0a16f1905dfcfc9305722b3ab50ce2e78a13d8f8acddb22] <==
	I1124 13:48:24.587750       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:48:24.597787       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:48:24.597855       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 13:48:24.600788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:24.606113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:48:24.606397       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:48:24.606646       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-608395_58da3de6-110c-42ba-ae46-08bea4778988!
	I1124 13:48:24.606790       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1725d06e-f0b5-414f-b855-627c3860c519", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-608395_58da3de6-110c-42ba-ae46-08bea4778988 became leader
	W1124 13:48:24.608881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:24.613215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:48:24.706978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-608395_58da3de6-110c-42ba-ae46-08bea4778988!
	W1124 13:48:26.617331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:26.623469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:28.627192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:28.631977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:30.635249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:30.640668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:32.643448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:32.647985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:34.651906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:34.657673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:36.661156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:36.666197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
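The scheduler's "Failed to watch ... is forbidden" errors and the kubelet's MountVolume failures for the projected kube-api-access volumes (configmap "kube-root-ca.crt" not found) in the log above are startup-ordering noise rather than the deploy failure itself: kube-controller-manager publishes kube-root-ca.crt into each namespace shortly after bootstrap, the kubelet retries after 500ms, and the same log shows kube-proxy, kindnet, coredns, storage-provisioner and the busybox pod all reaching Running within seconds. If this needed confirming against the live profile, a minimal check (not part of the test harness; the context name is taken from the log above) would be:

    # confirm the root CA configmap has been published for the projected token volumes
    kubectl --context no-preload-608395 -n kube-system get configmap kube-root-ca.crt
    # confirm the scheduler's RBAC now allows one of the resources it failed to watch at startup
    kubectl --context no-preload-608395 auth can-i list persistentvolumes --as=system:kube-scheduler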
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-608395 -n no-preload-608395
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-608395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-608395
helpers_test.go:243: (dbg) docker inspect no-preload-608395:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517",
	        "Created": "2025-11-24T13:47:36.064034647Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 610011,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:47:36.107803041Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517/hostname",
	        "HostsPath": "/var/lib/docker/containers/a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517/hosts",
	        "LogPath": "/var/lib/docker/containers/a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517/a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517-json.log",
	        "Name": "/no-preload-608395",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-608395:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-608395",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a2cfa332a8b5a7653329ee2f376e65aae38a42fb563cebe264c8be1149451517",
	                "LowerDir": "/var/lib/docker/overlay2/07db3b81c9ae03654a1edfc8ae28fb3d1574335a879cbcd8db0ec3d1b8c2b022-init/diff:/var/lib/docker/overlay2/0f013e03fd0eaee4efc608fb0376e7d3e8ba628388f5191310c2259ab273ad26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07db3b81c9ae03654a1edfc8ae28fb3d1574335a879cbcd8db0ec3d1b8c2b022/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07db3b81c9ae03654a1edfc8ae28fb3d1574335a879cbcd8db0ec3d1b8c2b022/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07db3b81c9ae03654a1edfc8ae28fb3d1574335a879cbcd8db0ec3d1b8c2b022/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-608395",
	                "Source": "/var/lib/docker/volumes/no-preload-608395/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-608395",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-608395",
	                "name.minikube.sigs.k8s.io": "no-preload-608395",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "35e6c740b9266e02a48be0cb2494d2f8cd35e6377b15b9409b954948115a5bee",
	            "SandboxKey": "/var/run/docker/netns/35e6c740b926",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-608395": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "85e2905f6131e6f4ab94166eee446126fc1d6139a5452c9dd9a7c77abe756db0",
	                    "EndpointID": "ca65671436ff405263c2edcb381a8d49767e507c49f609ebdb40212efcfa2c6b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ce:1e:b1:5e:7d:83",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-608395",
	                        "a2cfa332a8b5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
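The NetworkSettings block above shows how this cluster is reachable from the host: the node's apiserver port 8443/tcp is published on 127.0.0.1:33444 and SSH (22/tcp) on 127.0.0.1:33441, with the container at 192.168.103.2 on the dedicated no-preload-608395 bridge network. To pull just those mappings out of a profile without reading the full inspect JSON, plain Docker CLI calls (not minikube or test-harness commands) such as the following would do:

    # host endpoint for the apiserver port published by the kic container
    docker port no-preload-608395 8443/tcp
    # full published-port map as JSON
    docker inspect --format '{{json .NetworkSettings.Ports}}' no-preload-608395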
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-608395 -n no-preload-608395
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-608395 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-608395 logs -n 25: (1.227290537s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-355661 sudo containerd config dump                                                                                                                                                                                                        │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ start   │ -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ ssh     │ -p cilium-355661 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ ssh     │ -p cilium-355661 sudo crio config                                                                                                                                                                                                                   │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │                     │
	│ delete  │ -p cilium-355661                                                                                                                                                                                                                                    │ cilium-355661             │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ start   │ -p force-systemd-flag-775412 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                   │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:46 UTC │
	│ start   │ -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                         │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:46 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ force-systemd-flag-775412 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p force-systemd-flag-775412                                                                                                                                                                                                                        │ force-systemd-flag-775412 │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ -p NoKubernetes-787855 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │                     │
	│ start   │ -p cert-options-342221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ stop    │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p NoKubernetes-787855 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ cert-options-342221 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ ssh     │ -p cert-options-342221 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ delete  │ -p cert-options-342221                                                                                                                                                                                                                              │ cert-options-342221       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p old-k8s-version-513442 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-513442    │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:48 UTC │
	│ ssh     │ -p NoKubernetes-787855 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │                     │
	│ delete  │ -p NoKubernetes-787855                                                                                                                                                                                                                              │ NoKubernetes-787855       │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:47 UTC │
	│ start   │ -p no-preload-608395 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-608395         │ jenkins │ v1.37.0 │ 24 Nov 25 13:47 UTC │ 24 Nov 25 13:48 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-513442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-513442    │ jenkins │ v1.37.0 │ 24 Nov 25 13:48 UTC │ 24 Nov 25 13:48 UTC │
	│ stop    │ -p old-k8s-version-513442 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-513442    │ jenkins │ v1.37.0 │ 24 Nov 25 13:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:47:35
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:47:35.072446  608917 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:47:35.072749  608917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:47:35.072763  608917 out.go:374] Setting ErrFile to fd 2...
	I1124 13:47:35.072768  608917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:47:35.073046  608917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:47:35.073526  608917 out.go:368] Setting JSON to false
	I1124 13:47:35.074857  608917 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8994,"bootTime":1763983061,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:47:35.074959  608917 start.go:143] virtualization: kvm guest
	I1124 13:47:35.077490  608917 out.go:179] * [no-preload-608395] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:47:35.079255  608917 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:47:35.079255  608917 notify.go:221] Checking for updates...
	I1124 13:47:35.080776  608917 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:47:35.082396  608917 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:47:35.083932  608917 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:47:35.085251  608917 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:47:35.086603  608917 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:47:35.089427  608917 config.go:182] Loaded profile config "cert-expiration-099863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:35.089575  608917 config.go:182] Loaded profile config "kubernetes-upgrade-358357": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:35.089706  608917 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:47:35.089837  608917 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:47:35.114581  608917 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:47:35.114769  608917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:47:35.180508  608917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 13:47:35.169616068 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:47:35.180627  608917 docker.go:319] overlay module found
	I1124 13:47:35.182258  608917 out.go:179] * Using the docker driver based on user configuration
	I1124 13:47:35.183642  608917 start.go:309] selected driver: docker
	I1124 13:47:35.183663  608917 start.go:927] validating driver "docker" against <nil>
	I1124 13:47:35.183675  608917 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:47:35.184437  608917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:47:35.249663  608917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 13:47:35.237755455 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:47:35.249975  608917 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:47:35.250402  608917 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:47:35.252318  608917 out.go:179] * Using Docker driver with root privileges
	I1124 13:47:35.254354  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:47:35.254446  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:35.254457  608917 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:47:35.254652  608917 start.go:353] cluster config:
	{Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:35.256201  608917 out.go:179] * Starting "no-preload-608395" primary control-plane node in "no-preload-608395" cluster
	I1124 13:47:35.257392  608917 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:47:35.258857  608917 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:47:35.260330  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:47:35.260404  608917 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:47:35.260496  608917 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json ...
	I1124 13:47:35.260537  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json: {Name:mk2f4d5eff7070dcec35f39f30e01cd0b3fcce8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:35.260546  608917 cache.go:107] acquiring lock: {Name:mk28ec677a69a6f418643b8b89331fa25b8c42f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260546  608917 cache.go:107] acquiring lock: {Name:mkad3cbb6fa2e7f41e4d7c0e1e3c74156dc55521 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260557  608917 cache.go:107] acquiring lock: {Name:mk7aef7fc4ff6e4e4541fdeb1d5e26c13a66856b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260584  608917 cache.go:107] acquiring lock: {Name:mk586ecbe7f4b4aab48f8ad28d0d7b1848898c9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260604  608917 cache.go:107] acquiring lock: {Name:mkf548ea8c9721a4e4ad1e37073c3deea8530810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260622  608917 cache.go:107] acquiring lock: {Name:mk1ce266bd6b9003a6a371facbc84809dce0c3c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260651  608917 cache.go:107] acquiring lock: {Name:mk687b2dcc146d43e9d607f472f2f08a2307baed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260663  608917 cache.go:107] acquiring lock: {Name:mk4b559f0fdae6e96edea26981618bf8d9d50b2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.260712  608917 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:35.260755  608917 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:35.260801  608917 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:35.260819  608917 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:35.260852  608917 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:35.260858  608917 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:47:35.260727  608917 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:35.261039  608917 cache.go:115] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 13:47:35.261050  608917 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 523.955µs
	I1124 13:47:35.261069  608917 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 13:47:35.262249  608917 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:35.262277  608917 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:35.262359  608917 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:35.262407  608917 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:47:35.262461  608917 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:35.262522  608917 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:35.262735  608917 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:35.285963  608917 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:47:35.285989  608917 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:47:35.286014  608917 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:47:35.286057  608917 start.go:360] acquireMachinesLock for no-preload-608395: {Name:mkc9d1cf0cec9be2b369f1e47c690fc0399e88e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:47:35.286191  608917 start.go:364] duration metric: took 102.178µs to acquireMachinesLock for "no-preload-608395"
	I1124 13:47:35.286224  608917 start.go:93] Provisioning new machine with config: &{Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:47:35.286330  608917 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:47:30.558317  607669 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:47:30.558626  607669 start.go:159] libmachine.API.Create for "old-k8s-version-513442" (driver="docker")
	I1124 13:47:30.558656  607669 client.go:173] LocalClient.Create starting
	I1124 13:47:30.558725  607669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem
	I1124 13:47:30.558754  607669 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:30.558772  607669 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:30.558826  607669 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem
	I1124 13:47:30.558849  607669 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:30.558860  607669 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:30.559212  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:47:30.577139  607669 cli_runner.go:211] docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:47:30.577245  607669 network_create.go:284] running [docker network inspect old-k8s-version-513442] to gather additional debugging logs...
	I1124 13:47:30.577276  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442
	W1124 13:47:30.593786  607669 cli_runner.go:211] docker network inspect old-k8s-version-513442 returned with exit code 1
	I1124 13:47:30.593826  607669 network_create.go:287] error running [docker network inspect old-k8s-version-513442]: docker network inspect old-k8s-version-513442: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-513442 not found
	I1124 13:47:30.593854  607669 network_create.go:289] output of [docker network inspect old-k8s-version-513442]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-513442 not found
	
	** /stderr **
	I1124 13:47:30.594026  607669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:30.613315  607669 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8afb578efdfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:5e:46:43:aa:fe} reservation:<nil>}
	I1124 13:47:30.614364  607669 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ca3a55f53176 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:98:62:4c:91:8f} reservation:<nil>}
	I1124 13:47:30.614827  607669 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e11236ccf9ba IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:36:3b:80:be:95:34} reservation:<nil>}
	I1124 13:47:30.615410  607669 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-35b7bf6fd97a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:12:4e:d4:19:26} reservation:<nil>}
	I1124 13:47:30.616018  607669 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1f5932eecbe7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:aa:ff:d3:cd:de:0f} reservation:<nil>}
	I1124 13:47:30.617269  607669 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7fa00}
	I1124 13:47:30.617308  607669 network_create.go:124] attempt to create docker network old-k8s-version-513442 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 13:47:30.617398  607669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-513442 old-k8s-version-513442
	I1124 13:47:30.671102  607669 network_create.go:108] docker network old-k8s-version-513442 192.168.94.0/24 created
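The subnet scan above walks the existing bridge networks (192.168.49/58/67/76/85) and settles on the first free /24, 192.168.94.0/24, before creating a dedicated bridge network for the profile. A minimal Go sketch of that `docker network create` invocation using only os/exec; the flags mirror the logged command, while the name, subnet and gateway are placeholders for whatever the scan picked:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// createProfileNetwork mirrors the `docker network create` call from the log:
// a bridge network with a fixed subnet/gateway, MTU 1500 and minikube labels.
func createProfileNetwork(name, subnet, gateway string) error {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values taken from the old-k8s-version-513442 run above; adjust as needed.
	if err := createProfileNetwork("old-k8s-version-513442", "192.168.94.0/24", "192.168.94.1"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("network created")
}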
	I1124 13:47:30.671150  607669 kic.go:121] calculated static IP "192.168.94.2" for the "old-k8s-version-513442" container
	I1124 13:47:30.671218  607669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:47:30.689078  607669 cli_runner.go:164] Run: docker volume create old-k8s-version-513442 --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:47:30.709312  607669 oci.go:103] Successfully created a docker volume old-k8s-version-513442
	I1124 13:47:30.709408  607669 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-513442-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --entrypoint /usr/bin/test -v old-k8s-version-513442:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:47:31.132905  607669 oci.go:107] Successfully prepared a docker volume old-k8s-version-513442
	I1124 13:47:31.132980  607669 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:47:31.132992  607669 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:47:31.133075  607669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-513442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:47:35.011677  607669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-513442:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.878547269s)
	I1124 13:47:35.011716  607669 kic.go:203] duration metric: took 3.878721361s to extract preloaded images to volume ...
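The lines above capture the preload trick: the lz4 tarball of cached images is bind-mounted read-only into a throwaway kicbase container whose entrypoint is /usr/bin/tar, and it unpacks straight into the profile's named volume (here in about 3.9s). A hedged Go sketch of that one-shot extraction; the tarball path and volume name are placeholders, and the image tag is trimmed of its digest for readability:

package main

import (
	"log"
	"os"
	"os/exec"
)

const kicbase = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948"

// extractPreload unpacks an lz4-compressed image tarball into a docker volume
// by running tar inside a disposable kicbase container, as in the log above.
func extractPreload(tarball, volume string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicbase,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Placeholder paths; the real run uses the ~/.minikube preload cache.
	if err := extractPreload("/tmp/preloaded-images.tar.lz4", "old-k8s-version-513442"); err != nil {
		log.Fatal(err)
	}
}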
	W1124 13:47:35.011796  607669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:47:35.011829  607669 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:47:35.011871  607669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:47:35.073961  607669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-513442 --name old-k8s-version-513442 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-513442 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-513442 --network old-k8s-version-513442 --ip 192.168.94.2 --volume old-k8s-version-513442:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:47:32.801968  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:32.802485  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:47:32.802542  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:32.802595  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:32.832902  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:32.832956  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:32.832963  572647 cri.go:89] found id: ""
	I1124 13:47:32.832972  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:32.833038  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.837621  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.841927  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:32.842013  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:32.877193  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:32.877214  572647 cri.go:89] found id: ""
	I1124 13:47:32.877223  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:32.877290  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.882239  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:32.882329  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:32.912677  572647 cri.go:89] found id: ""
	I1124 13:47:32.912709  572647 logs.go:282] 0 containers: []
	W1124 13:47:32.912727  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:32.912735  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:32.912799  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:32.942634  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:32.942656  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:32.942662  572647 cri.go:89] found id: ""
	I1124 13:47:32.942672  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:32.942735  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.947427  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:32.951442  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:32.951519  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:32.982583  572647 cri.go:89] found id: ""
	I1124 13:47:32.982614  572647 logs.go:282] 0 containers: []
	W1124 13:47:32.982626  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:32.982635  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:32.982706  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:33.013412  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:33.013432  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:33.013435  572647 cri.go:89] found id: ""
	I1124 13:47:33.013444  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:33.013492  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:33.017848  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:33.021955  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:33.022038  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:33.055691  572647 cri.go:89] found id: ""
	I1124 13:47:33.055722  572647 logs.go:282] 0 containers: []
	W1124 13:47:33.055733  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:33.055743  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:33.055822  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:33.086844  572647 cri.go:89] found id: ""
	I1124 13:47:33.086868  572647 logs.go:282] 0 containers: []
	W1124 13:47:33.086877  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:33.086887  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:33.086904  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:33.140737  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:33.140775  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:33.185221  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:33.185259  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:33.218642  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:33.218669  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:33.251506  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:33.251634  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:33.346627  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:33.346672  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:33.363530  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:33.363571  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:33.400997  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:33.401042  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:33.446051  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:33.446088  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:33.484418  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:33.484465  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:33.537056  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:33.537093  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:33.611727  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:33.611762  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:33.611778  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.150015  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:36.150435  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:47:36.150499  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:36.150559  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:36.181496  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:36.181524  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.181530  572647 cri.go:89] found id: ""
	I1124 13:47:36.181541  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:36.181626  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.186587  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.190995  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:36.191076  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
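Interleaved with the create flow, the 572647 process keeps probing its restarted cluster: each round checks https://192.168.76.2:8443/healthz, gets connection refused, and falls back to listing CRI containers with crictl and tailing their logs. A rough Go sketch of that healthz polling loop; TLS verification is skipped here for brevity (an assumption — the real check trusts the cluster CA), and the URL and timeout are taken from the log only as examples:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Connection refused (as in the log) just means the apiserver
		// container has not come back yet; retry after a short pause.
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}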
	I1124 13:47:35.288531  608917 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:47:35.288826  608917 start.go:159] libmachine.API.Create for "no-preload-608395" (driver="docker")
	I1124 13:47:35.288879  608917 client.go:173] LocalClient.Create starting
	I1124 13:47:35.288981  608917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem
	I1124 13:47:35.289027  608917 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:35.289053  608917 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:35.289129  608917 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem
	I1124 13:47:35.289159  608917 main.go:143] libmachine: Decoding PEM data...
	I1124 13:47:35.289172  608917 main.go:143] libmachine: Parsing certificate...
	I1124 13:47:35.289667  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:47:35.309178  608917 cli_runner.go:211] docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:47:35.309257  608917 network_create.go:284] running [docker network inspect no-preload-608395] to gather additional debugging logs...
	I1124 13:47:35.309283  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395
	W1124 13:47:35.328323  608917 cli_runner.go:211] docker network inspect no-preload-608395 returned with exit code 1
	I1124 13:47:35.328350  608917 network_create.go:287] error running [docker network inspect no-preload-608395]: docker network inspect no-preload-608395: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-608395 not found
	I1124 13:47:35.328362  608917 network_create.go:289] output of [docker network inspect no-preload-608395]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-608395 not found
	
	** /stderr **
	I1124 13:47:35.328448  608917 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:35.351281  608917 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8afb578efdfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:5e:46:43:aa:fe} reservation:<nil>}
	I1124 13:47:35.352105  608917 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ca3a55f53176 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:98:62:4c:91:8f} reservation:<nil>}
	I1124 13:47:35.352583  608917 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e11236ccf9ba IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:36:3b:80:be:95:34} reservation:<nil>}
	I1124 13:47:35.353066  608917 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-35b7bf6fd97a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:12:4e:d4:19:26} reservation:<nil>}
	I1124 13:47:35.353566  608917 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1f5932eecbe7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:aa:ff:d3:cd:de:0f} reservation:<nil>}
	I1124 13:47:35.354145  608917 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-57f535f2d59b IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:6e:28:a9:1e:8a:96} reservation:<nil>}
	I1124 13:47:35.354775  608917 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d86bc0}
	I1124 13:47:35.354805  608917 network_create.go:124] attempt to create docker network no-preload-608395 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 13:47:35.354861  608917 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-608395 no-preload-608395
	I1124 13:47:35.432539  608917 network_create.go:108] docker network no-preload-608395 192.168.103.0/24 created
	I1124 13:47:35.432598  608917 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-608395" container
	I1124 13:47:35.432695  608917 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:47:35.453593  608917 cli_runner.go:164] Run: docker volume create no-preload-608395 --label name.minikube.sigs.k8s.io=no-preload-608395 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:47:35.471825  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:47:35.475329  608917 oci.go:103] Successfully created a docker volume no-preload-608395
	I1124 13:47:35.475418  608917 cli_runner.go:164] Run: docker run --rm --name no-preload-608395-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-608395 --entrypoint /usr/bin/test -v no-preload-608395:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:47:35.484374  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:47:35.522730  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:47:35.528813  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:47:35.529239  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:47:35.541677  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:47:35.561542  608917 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:47:35.640840  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 13:47:35.640868  608917 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 380.250244ms
	I1124 13:47:35.640883  608917 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 13:47:35.985260  608917 oci.go:107] Successfully prepared a docker volume no-preload-608395
	I1124 13:47:35.985319  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1124 13:47:35.985414  608917 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:47:35.985453  608917 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:47:35.985506  608917 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:47:36.047047  608917 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-608395 --name no-preload-608395 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-608395 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-608395 --network no-preload-608395 --ip 192.168.103.2 --volume no-preload-608395:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:47:36.258467  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1124 13:47:36.258503  608917 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 997.955969ms
	I1124 13:47:36.258519  608917 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1124 13:47:36.410125  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Running}}
	I1124 13:47:36.432289  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.453312  608917 cli_runner.go:164] Run: docker exec no-preload-608395 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:47:36.504193  608917 oci.go:144] the created container "no-preload-608395" has a running status.
	I1124 13:47:36.504226  608917 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa...
	I1124 13:47:36.604837  608917 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:47:36.631267  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.655799  608917 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:47:36.655830  608917 kic_runner.go:114] Args: [docker exec --privileged no-preload-608395 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:47:36.705661  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:47:36.729778  608917 machine.go:94] provisionDockerMachine start ...
	I1124 13:47:36.729884  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:36.756901  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:36.757367  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:36.757380  608917 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:47:36.758446  608917 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 13:47:37.510037  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1124 13:47:37.510068  608917 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 2.249448579s
	I1124 13:47:37.510081  608917 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1124 13:47:37.572176  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1124 13:47:37.572211  608917 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.31168357s
	I1124 13:47:37.572229  608917 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1124 13:47:37.595833  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1124 13:47:37.595868  608917 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 2.335217312s
	I1124 13:47:37.595886  608917 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1124 13:47:37.719899  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1124 13:47:37.719956  608917 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 2.45935214s
	I1124 13:47:37.719969  608917 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1124 13:47:38.059972  608917 cache.go:157] /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1124 13:47:38.060022  608917 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.799433794s
	I1124 13:47:38.060036  608917 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1124 13:47:38.060055  608917 cache.go:87] Successfully saved all images to host disk.
	I1124 13:47:39.915534  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-608395
	
	I1124 13:47:39.915567  608917 ubuntu.go:182] provisioning hostname "no-preload-608395"
	I1124 13:47:39.915651  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:39.936421  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:39.936658  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:39.936672  608917 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-608395 && echo "no-preload-608395" | sudo tee /etc/hostname
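For both profiles the SSH provisioning goes through the same motions: read the host port that docker published for the container's 22/tcp, then dial 127.0.0.1 on that port and retry until sshd inside the kicbase container answers (the first attempts fail with EOF or connection reset, as logged above). A simplified Go sketch of that lookup-and-wait, using a plain TCP reachability check instead of a full SSH handshake; the container name is one from this run, the timeouts are assumptions:

package main

import (
	"fmt"
	"log"
	"net"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort asks docker for the 127.0.0.1 port that the container's 22/tcp
// was published to, using the same inspect template as the log above.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// waitForSSH retries a TCP dial; early attempts typically fail with EOF or
// connection reset (as in the log) while sshd is still starting.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	port, err := sshHostPort("no-preload-608395")
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForSSH("127.0.0.1:"+port, time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sshd is up on port", port)
}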
	I1124 13:47:35.415632  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Running}}
	I1124 13:47:35.436407  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.457824  607669 cli_runner.go:164] Run: docker exec old-k8s-version-513442 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:47:35.505936  607669 oci.go:144] the created container "old-k8s-version-513442" has a running status.
	I1124 13:47:35.505993  607669 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa...
	I1124 13:47:35.536159  607669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:47:35.565751  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.587350  607669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:47:35.587376  607669 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-513442 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:47:35.639485  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:47:35.659275  607669 machine.go:94] provisionDockerMachine start ...
	I1124 13:47:35.659377  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:35.682791  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:35.683193  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:35.683215  607669 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:47:35.683887  607669 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57402->127.0.0.1:33435: read: connection reset by peer
	I1124 13:47:38.829345  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-513442
	
	I1124 13:47:38.829376  607669 ubuntu.go:182] provisioning hostname "old-k8s-version-513442"
	I1124 13:47:38.829451  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:38.847276  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:38.847521  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:38.847540  607669 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-513442 && echo "old-k8s-version-513442" | sudo tee /etc/hostname
	I1124 13:47:39.005190  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-513442
	
	I1124 13:47:39.005277  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.023623  607669 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:39.023848  607669 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1124 13:47:39.023866  607669 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-513442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-513442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-513442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:47:39.170228  607669 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:47:39.170266  607669 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:47:39.170286  607669 ubuntu.go:190] setting up certificates
	I1124 13:47:39.170295  607669 provision.go:84] configureAuth start
	I1124 13:47:39.170348  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.189446  607669 provision.go:143] copyHostCerts
	I1124 13:47:39.189521  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:47:39.189536  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:47:39.189619  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:47:39.189751  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:47:39.189764  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:47:39.189810  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:47:39.189989  607669 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:47:39.190006  607669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:47:39.190054  607669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:47:39.190154  607669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-513442 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-513442]
	I1124 13:47:39.227079  607669 provision.go:177] copyRemoteCerts
	I1124 13:47:39.227139  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:47:39.227177  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.244951  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.349311  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:47:39.371319  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:47:39.391311  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 13:47:39.411071  607669 provision.go:87] duration metric: took 240.75737ms to configureAuth
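configureAuth above copies the host CA material into place and then mints a server certificate whose SANs cover 127.0.0.1, the node's static IP 192.168.94.2, localhost, minikube and the profile name. A rough sketch of issuing that kind of CA-signed server certificate with Go's crypto/x509; the file paths, PKCS#1 key format and three-year lifetime are assumptions, not minikube's actual cert helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Placeholder CA material; the run above uses ~/.minikube/certs/ca.pem and ca-key.pem.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Assumes an RSA key in PKCS#1 ("RSA PRIVATE KEY") PEM.
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// SANs mirror the log: the loopback and static IPs plus the profile's hostnames.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-513442"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-513442"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}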
	I1124 13:47:39.411102  607669 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:47:39.411303  607669 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:47:39.411317  607669 machine.go:97] duration metric: took 3.752022568s to provisionDockerMachine
	I1124 13:47:39.411325  607669 client.go:176] duration metric: took 8.852661553s to LocalClient.Create
	I1124 13:47:39.411358  607669 start.go:167] duration metric: took 8.852720089s to libmachine.API.Create "old-k8s-version-513442"
	I1124 13:47:39.411372  607669 start.go:293] postStartSetup for "old-k8s-version-513442" (driver="docker")
	I1124 13:47:39.411388  607669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:47:39.411452  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:47:39.411508  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.429085  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.536320  607669 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:47:39.540367  607669 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:47:39.540402  607669 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:47:39.540414  607669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:47:39.540470  607669 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:47:39.540543  607669 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:47:39.540631  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:47:39.549275  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:39.573695  607669 start.go:296] duration metric: took 162.301306ms for postStartSetup
	I1124 13:47:39.574191  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.593438  607669 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/config.json ...
	I1124 13:47:39.593801  607669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:47:39.593897  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.615008  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.717288  607669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:47:39.722340  607669 start.go:128] duration metric: took 9.166080327s to createHost
	I1124 13:47:39.722370  607669 start.go:83] releasing machines lock for "old-k8s-version-513442", held for 9.166275546s
	I1124 13:47:39.722447  607669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-513442
	I1124 13:47:39.743680  607669 ssh_runner.go:195] Run: cat /version.json
	I1124 13:47:39.743731  607669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:47:39.743745  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.743812  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:47:39.763336  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.763737  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:47:39.929805  607669 ssh_runner.go:195] Run: systemctl --version
	I1124 13:47:39.938447  607669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:47:39.944068  607669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:47:39.944147  607669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:47:39.974609  607669 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:47:39.974641  607669 start.go:496] detecting cgroup driver to use...
	I1124 13:47:39.974679  607669 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:47:39.974728  607669 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:47:39.990824  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:47:40.004856  607669 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:47:40.004920  607669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:47:40.024248  607669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:47:40.044433  607669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:47:40.145638  607669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:47:40.247759  607669 docker.go:234] disabling docker service ...
	I1124 13:47:40.247829  607669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:47:40.269922  607669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:47:40.284840  607669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:47:40.379978  607669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:47:40.471616  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:47:40.485207  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:47:40.501980  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 13:47:40.513545  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:47:40.524134  607669 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:47:40.524215  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:47:40.533927  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:40.543474  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:47:40.553177  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:40.563129  607669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:47:40.572813  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:47:40.583799  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:47:40.593872  607669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:47:40.604166  607669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:47:40.612262  607669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:47:40.620472  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:40.706065  607669 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 13:47:40.809269  607669 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:47:40.809335  607669 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:47:40.814110  607669 start.go:564] Will wait 60s for crictl version
	I1124 13:47:40.814187  607669 ssh_runner.go:195] Run: which crictl
	I1124 13:47:40.818745  607669 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:47:40.843808  607669 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:47:40.843877  607669 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:40.865477  607669 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:40.893673  607669 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
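Before declaring Kubernetes ready to prepare, the block above rewrites /etc/containerd/config.toml in place — systemd cgroup driver (SystemdCgroup = true), runc v2, the CNI conf dir and sandbox image — then runs daemon-reload and restarts containerd. A small Go sketch of just the SystemdCgroup edit, done with a regexp instead of the logged sed; it assumes it runs on the node itself with root, which is not how minikube actually applies it (minikube runs sed over SSH):

package main

import (
	"log"
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Same substitution as the logged sed:
	//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))

	if err := os.WriteFile(path, updated, 0644); err != nil {
		log.Fatal(err)
	}

	// Reload units and restart containerd, as in the log.
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "containerd"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v: %s", args, err, out)
		}
	}
}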
	I1124 13:47:36.234464  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:36.234492  572647 cri.go:89] found id: ""
	I1124 13:47:36.234504  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:36.234584  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.240249  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:36.240335  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:36.279967  572647 cri.go:89] found id: ""
	I1124 13:47:36.279998  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.280009  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:36.280027  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:36.280082  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:36.313257  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:36.313286  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:36.313292  572647 cri.go:89] found id: ""
	I1124 13:47:36.313302  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:36.313364  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.317818  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.322103  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:36.322170  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:36.352450  572647 cri.go:89] found id: ""
	I1124 13:47:36.352485  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.352497  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:36.352506  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:36.352569  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:36.381849  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:36.381876  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:36.381881  572647 cri.go:89] found id: ""
	I1124 13:47:36.381896  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:36.381995  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.386540  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:36.391244  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:36.391326  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:36.425813  572647 cri.go:89] found id: ""
	I1124 13:47:36.425845  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.425856  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:36.425864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:36.425945  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:36.461097  572647 cri.go:89] found id: ""
	I1124 13:47:36.461127  572647 logs.go:282] 0 containers: []
	W1124 13:47:36.461139  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:36.461153  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:36.461172  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:36.499983  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:36.500029  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:36.521192  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:36.521223  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:36.557807  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:36.557859  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:36.611092  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:36.611122  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:36.647506  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:36.647538  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:36.773107  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:36.773142  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:36.847612  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:36.847637  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:36.847662  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:36.887116  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:36.887154  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:36.924700  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:36.924746  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:36.974655  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:36.974689  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:37.017086  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:37.017118  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
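The log-gathering loop above always resolves container IDs with crictl ps -a --quiet --name=<component> and then tails each one. A hand-run equivalent for a single component, as a sketch (the component name is just an example):

	# Resolve a container ID by name, then pull its most recent 400 log lines,
	# mirroring the crictl invocations in the log above.
	id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	[ -n "$id" ] && sudo crictl logs --tail 400 "$id"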
	I1124 13:47:39.548013  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:39.548547  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:47:39.548616  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:39.548676  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:39.577831  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:39.577852  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:39.577857  572647 cri.go:89] found id: ""
	I1124 13:47:39.577867  572647 logs.go:282] 2 containers: [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:39.577947  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.582354  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.586625  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:39.586710  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:39.614522  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:39.614543  572647 cri.go:89] found id: ""
	I1124 13:47:39.614552  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:39.614607  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.619054  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:39.619127  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:39.646326  572647 cri.go:89] found id: ""
	I1124 13:47:39.646352  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.646363  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:39.646370  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:39.646429  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:39.672725  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:39.672745  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:39.672749  572647 cri.go:89] found id: ""
	I1124 13:47:39.672757  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:39.672814  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.677191  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.681175  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:39.681258  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:39.708431  572647 cri.go:89] found id: ""
	I1124 13:47:39.708455  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.708464  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:39.708470  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:39.708519  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:39.740642  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.740666  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:39.740672  572647 cri.go:89] found id: ""
	I1124 13:47:39.740682  572647 logs.go:282] 2 containers: [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:39.740749  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.745558  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:39.749963  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:39.750090  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:39.785165  572647 cri.go:89] found id: ""
	I1124 13:47:39.785200  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.785213  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:39.785223  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:39.785297  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:39.816314  572647 cri.go:89] found id: ""
	I1124 13:47:39.816344  572647 logs.go:282] 0 containers: []
	W1124 13:47:39.816356  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:39.816369  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:39.816386  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:39.855047  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:39.855082  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:39.884850  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:39.884886  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:39.923160  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:39.923209  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:40.011551  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:40.011587  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:40.028754  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:40.028784  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:40.073406  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:40.073463  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:40.118088  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:40.118130  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:47:40.186938  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:47:40.186963  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:40.186979  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:40.225544  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:40.225575  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:40.264167  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:40.264212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:40.310248  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:40.310285  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
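Each retry above boils down to an HTTPS GET against the apiserver's /healthz endpoint, which keeps failing with "connection refused" while the control plane comes back up. Reproducing the probe by hand, as a sketch (IP and port taken from the log; -k is needed because the serving certificate is self-signed):

	# Probe the apiserver health endpoint the same way the retry loop does;
	# "connection refused" is expected until the apiserver static pod is up again.
	curl -k --max-time 2 https://192.168.76.2:8443/healthz || echo "apiserver not ready yet"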
	I1124 13:47:40.101111  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-608395
	
	I1124 13:47:40.101196  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.122644  608917 main.go:143] libmachine: Using SSH client type: native
	I1124 13:47:40.122921  608917 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1124 13:47:40.122949  608917 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-608395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-608395/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-608395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:47:40.280196  608917 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:47:40.280226  608917 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:47:40.280268  608917 ubuntu.go:190] setting up certificates
	I1124 13:47:40.280293  608917 provision.go:84] configureAuth start
	I1124 13:47:40.280380  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.303469  608917 provision.go:143] copyHostCerts
	I1124 13:47:40.303532  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:47:40.303543  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:47:40.303590  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:47:40.303726  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:47:40.303739  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:47:40.303772  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:47:40.303856  608917 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:47:40.303868  608917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:47:40.303892  608917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:47:40.303983  608917 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.no-preload-608395 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-608395]
	I1124 13:47:40.375070  608917 provision.go:177] copyRemoteCerts
	I1124 13:47:40.375131  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:47:40.375180  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.394610  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.501959  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:47:40.523137  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 13:47:40.542279  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:47:40.562226  608917 provision.go:87] duration metric: took 281.905194ms to configureAuth
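configureAuth above regenerates the docker-machine server certificate with the SANs listed in the provision line (127.0.0.1, 192.168.103.2, localhost, minikube, no-preload-608395). One way to confirm those SANs ended up in the certificate, as a sketch; MACHINE_PEM is a hypothetical path based on the .minikube layout shown above:

	# Inspect the SANs on the generated docker-machine server certificate.
	MACHINE_PEM="$HOME/.minikube/machines/server.pem"   # adjust for your MINIKUBE_HOME
	openssl x509 -noout -text -in "$MACHINE_PEM" | grep -A1 'Subject Alternative Name'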
	I1124 13:47:40.562265  608917 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:47:40.562572  608917 config.go:182] Loaded profile config "no-preload-608395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:47:40.562595  608917 machine.go:97] duration metric: took 3.832793094s to provisionDockerMachine
	I1124 13:47:40.562604  608917 client.go:176] duration metric: took 5.273718281s to LocalClient.Create
	I1124 13:47:40.562649  608917 start.go:167] duration metric: took 5.273809151s to libmachine.API.Create "no-preload-608395"
	I1124 13:47:40.562659  608917 start.go:293] postStartSetup for "no-preload-608395" (driver="docker")
	I1124 13:47:40.562671  608917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:47:40.562721  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:47:40.562769  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.582715  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.688873  608917 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:47:40.692683  608917 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:47:40.692717  608917 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:47:40.692818  608917 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:47:40.692947  608917 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:47:40.693078  608917 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:47:40.693208  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:47:40.702139  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:40.725883  608917 start.go:296] duration metric: took 163.205649ms for postStartSetup
	I1124 13:47:40.726376  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.744526  608917 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/config.json ...
	I1124 13:47:40.745022  608917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:47:40.745098  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.763260  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.869180  608917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:47:40.874423  608917 start.go:128] duration metric: took 5.58807074s to createHost
	I1124 13:47:40.874458  608917 start.go:83] releasing machines lock for "no-preload-608395", held for 5.58825096s
	I1124 13:47:40.874540  608917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-608395
	I1124 13:47:40.896709  608917 ssh_runner.go:195] Run: cat /version.json
	I1124 13:47:40.896763  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.896807  608917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:47:40.896904  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:47:40.918859  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:40.920576  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:47:41.084454  608917 ssh_runner.go:195] Run: systemctl --version
	I1124 13:47:41.091582  608917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:47:41.097406  608917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:47:41.097478  608917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:47:41.125540  608917 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:47:41.125566  608917 start.go:496] detecting cgroup driver to use...
	I1124 13:47:41.125601  608917 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:47:41.125650  608917 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:47:41.148294  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:47:41.167664  608917 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:47:41.167740  608917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:47:41.189235  608917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:47:41.213594  608917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:47:41.336134  608917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:47:41.426955  608917 docker.go:234] disabling docker service ...
	I1124 13:47:41.427023  608917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:47:41.448189  608917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:47:41.462073  608917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:47:41.548298  608917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:47:41.635202  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
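The two stop/disable blocks above take cri-docker and dockerd out of the picture so that containerd is the only CRI endpoint on the node. A quick manual check of that state, as a sketch:

	# Verify dockerd and cri-docker are out of the way so containerd owns the CRI socket.
	systemctl is-active docker.service cri-docker.service || true   # expect "inactive"
	systemctl is-enabled docker.socket cri-docker.socket || true    # expect "masked"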
	I1124 13:47:41.649149  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:47:41.664451  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 13:47:41.676460  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:47:41.686131  608917 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:47:41.686199  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:47:41.695720  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:41.705503  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:47:41.714879  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:47:41.724369  608917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:47:41.733131  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:47:41.742525  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:47:41.751826  608917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:47:41.762473  608917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:47:41.770755  608917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:47:41.779154  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:41.869150  608917 ssh_runner.go:195] Run: sudo systemctl restart containerd
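The run of sed commands above rewrites /etc/containerd/config.toml in place (pause image, SystemdCgroup, runc v2 shim, CNI conf dir) and then bounces containerd. A condensed sketch of the two edits that matter most for the cgroup-driver handoff, using the same expressions as the log:

	# Switch the runc shim to the systemd cgroup driver and pin the pause image,
	# then restart containerd so the CRI plugin reloads its config.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd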
	I1124 13:47:41.957807  608917 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:47:41.957876  608917 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:47:41.965431  608917 start.go:564] Will wait 60s for crictl version
	I1124 13:47:41.965500  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:41.970973  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:47:42.001317  608917 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:47:42.001405  608917 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:42.026320  608917 ssh_runner.go:195] Run: containerd --version
	I1124 13:47:42.052318  608917 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 13:47:40.896022  607669 cli_runner.go:164] Run: docker network inspect old-k8s-version-513442 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:40.918522  607669 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 13:47:40.923315  607669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:40.935781  607669 kubeadm.go:884] updating cluster {Name:old-k8s-version-513442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:47:40.935932  607669 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:47:40.935998  607669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:40.965650  607669 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:47:40.965689  607669 containerd.go:534] Images already preloaded, skipping extraction
	I1124 13:47:40.965773  607669 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:40.999412  607669 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:47:40.999441  607669 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:47:40.999451  607669 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 containerd true true} ...
	I1124 13:47:40.999568  607669 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-513442 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:47:40.999640  607669 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:47:41.030216  607669 cni.go:84] Creating CNI manager for ""
	I1124 13:47:41.030250  607669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:41.030273  607669 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:47:41.030304  607669 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-513442 NodeName:old-k8s-version-513442 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:47:41.030479  607669 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-513442"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:47:41.030593  607669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 13:47:41.040496  607669 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:47:41.040574  607669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:47:41.048965  607669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1124 13:47:41.063246  607669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:47:41.080199  607669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1124 13:47:41.095141  607669 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:47:41.099735  607669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:41.111816  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:41.205774  607669 ssh_runner.go:195] Run: sudo systemctl start kubelet
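The three scp steps above stage the kubelet unit file, its 10-kubeadm.conf drop-in, and kubeadm.yaml.new before the daemon-reload/start pair. To confirm the drop-in's ExecStart override actually took effect, the merged unit can be printed, as a sketch:

	# Show the kubelet unit together with the 10-kubeadm.conf drop-in and the
	# effective ExecStart that the drop-in replaces.
	systemctl cat kubelet
	systemctl show -p ExecStart kubelet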
	I1124 13:47:41.229647  607669 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442 for IP: 192.168.94.2
	I1124 13:47:41.229678  607669 certs.go:195] generating shared ca certs ...
	I1124 13:47:41.229702  607669 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.229867  607669 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:47:41.229906  607669 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:47:41.229935  607669 certs.go:257] generating profile certs ...
	I1124 13:47:41.230010  607669 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key
	I1124 13:47:41.230025  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt with IP's: []
	I1124 13:47:41.438692  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt ...
	I1124 13:47:41.438735  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: {Name:mkbb44e092f1569b20ffeeea6d19871e0c7ea39c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.438903  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key ...
	I1124 13:47:41.438942  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.key: {Name:mkcdbea7ce1dc4681fc91bbc4b78d2c028c94687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.439100  607669 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4
	I1124 13:47:41.439127  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 13:47:41.518895  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 ...
	I1124 13:47:41.518941  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4: {Name:mk47b90333d21f736ed33504f6da28b133242551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.519134  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4 ...
	I1124 13:47:41.519153  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4: {Name:mk4592466df77ceb7a68fa27e5f9a0201b1a8063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.519239  607669 certs.go:382] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt.eabc0cb4 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt
	I1124 13:47:41.519312  607669 certs.go:386] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key.eabc0cb4 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key
	I1124 13:47:41.519368  607669 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key
	I1124 13:47:41.519388  607669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt with IP's: []
	I1124 13:47:41.757186  607669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt ...
	I1124 13:47:41.757217  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt: {Name:mkb434108adbee544176aebf04c9ed8a63b76175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.757418  607669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key ...
	I1124 13:47:41.757442  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key: {Name:mk640e3789cee888121bd6cc947590ae24e90dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:41.757683  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:47:41.757725  607669 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:47:41.757736  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:47:41.757777  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:47:41.757814  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:47:41.757849  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:47:41.757940  607669 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:41.758610  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:47:41.778634  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:47:41.799349  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:47:41.825279  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:47:41.844900  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 13:47:41.865036  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:47:41.887428  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:47:41.912645  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:47:41.937284  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:47:41.966303  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:47:41.989056  607669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:47:42.011989  607669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:47:42.027976  607669 ssh_runner.go:195] Run: openssl version
	I1124 13:47:42.036340  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:47:42.046698  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.051406  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.051481  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:47:42.089903  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:47:42.100357  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:47:42.110986  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.115955  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.116031  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:42.153310  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:47:42.163209  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:47:42.173625  607669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.178229  607669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.178308  607669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:47:42.216281  607669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
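The openssl/ln pairs above follow the standard OpenSSL CA-directory convention: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0 for minikubeCA.pem here). The same operation for a single certificate, as a sketch:

	# Link a CA cert into /etc/ssl/certs under its OpenSSL subject hash.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"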
	I1124 13:47:42.228415  607669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:47:42.232854  607669 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:47:42.232959  607669 kubeadm.go:401] StartCluster: {Name:old-k8s-version-513442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-513442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:42.233058  607669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:47:42.233119  607669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:47:42.262130  607669 cri.go:89] found id: ""
	I1124 13:47:42.262225  607669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:47:42.271622  607669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:47:42.280568  607669 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:47:42.280637  607669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:47:42.289222  607669 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:47:42.289241  607669 kubeadm.go:158] found existing configuration files:
	
	I1124 13:47:42.289287  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:47:42.297481  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:47:42.297560  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:47:42.306305  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:47:42.315150  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:47:42.315224  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:47:42.324595  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:47:42.333840  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:47:42.333922  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:47:42.344021  607669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:47:42.355171  607669 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:47:42.355226  607669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:47:42.364345  607669 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:47:42.433190  607669 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 13:47:42.433270  607669 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:47:42.487608  607669 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:47:42.487695  607669 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:47:42.487758  607669 kubeadm.go:319] OS: Linux
	I1124 13:47:42.487823  607669 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:47:42.487892  607669 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:47:42.487986  607669 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:47:42.488057  607669 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:47:42.488125  607669 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:47:42.488216  607669 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:47:42.488285  607669 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:47:42.488352  607669 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:47:42.585565  607669 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:47:42.585750  607669 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:47:42.585896  607669 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 13:47:42.762435  607669 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:47:42.054673  608917 cli_runner.go:164] Run: docker network inspect no-preload-608395 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:47:42.073094  608917 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 13:47:42.078208  608917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
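
The /etc/hosts update above rewrites the file via a temp copy so the host.minikube.internal entry is replaced in one step instead of being edited in place. A rough Go equivalent, assuming it runs on the node itself; the IP and hostname are taken from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	ip, host := "192.168.103.1", "host.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale entry for the host before re-adding it.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := fmt.Sprintf("/tmp/hosts.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	// Copy into place with sudo, mirroring the logged "> /tmp/h.$$; sudo cp" pattern.
	if out, err := exec.Command("sudo", "cp", tmp, "/etc/hosts").CombinedOutput(); err != nil {
		panic(string(out))
	}
}
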
	I1124 13:47:42.089858  608917 kubeadm.go:884] updating cluster {Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:47:42.090126  608917 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:47:42.090181  608917 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:47:42.117576  608917 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1124 13:47:42.117601  608917 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 13:47:42.117671  608917 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:42.117683  608917 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.117696  608917 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.117708  608917 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.117683  608917 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.117737  608917 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.117738  608917 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.117773  608917 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.119957  608917 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.120028  608917 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.120041  608917 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.120103  608917 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.120144  608917 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.120206  608917 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.120361  608917 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:42.120651  608917 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
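
Because no preload tarball exists for v1.34.1 with containerd, LoadCachedImages first asks the local image daemon for each required image; the "No such image" lines show every lookup missing, so the images will come from minikube's on-disk cache instead. A small sketch of that first lookup step using the docker CLI (an assumption here for brevity; minikube performs the lookup through its own image package):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/kube-proxy:v1.34.1",
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/etcd:3.6.4-0",
		"registry.k8s.io/coredns/coredns:v1.12.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range images {
		// "docker image inspect" exits non-zero when the daemon has no such
		// image, matching the "daemon lookup ... No such image" lines above.
		if err := exec.Command("docker", "image", "inspect", img).Run(); err != nil {
			fmt.Printf("%s: not in local daemon, will load from the on-disk cache\n", img)
		}
	}
}
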
	I1124 13:47:42.324599  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1124 13:47:42.324658  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.329752  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1124 13:47:42.329811  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1124 13:47:42.340410  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1124 13:47:42.340483  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.345994  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1124 13:47:42.346082  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.350632  608917 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1124 13:47:42.350771  608917 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.350861  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.354889  608917 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 13:47:42.355021  608917 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 13:47:42.355078  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.365506  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1124 13:47:42.365584  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.370164  608917 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1124 13:47:42.370246  608917 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.370299  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.371573  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.371569  608917 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1124 13:47:42.371633  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.371663  608917 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.371700  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.383984  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1124 13:47:42.384064  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.391339  608917 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1124 13:47:42.391424  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.394058  608917 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1124 13:47:42.394107  608917 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.394139  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.394173  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.394139  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.410796  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.412029  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.415223  608917 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1124 13:47:42.415273  608917 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.415318  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.430558  608917 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1124 13:47:42.430610  608917 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.430661  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:42.432115  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.432240  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.432710  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:47:42.449068  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:47:42.451309  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.451333  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.451434  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:47:42.471426  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:47:42.471426  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.472006  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
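
Each "Checking existence of image" line runs ctr against containerd's k8s.io namespace and compares the stored digest; when the expected digest is missing, the tag is removed with crictl rmi so the cached tarball can be imported cleanly. A sketch of that check-and-remove step for one image (illustrative only; the digest comparison is simplified to a name check):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether containerd's k8s.io namespace already has the
// named image, the same check the "ctr -n=k8s.io images ls name==..." lines perform.
func imagePresent(name string) bool {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls", "name=="+name).Output()
	return err == nil && strings.Contains(string(out), name)
}

func main() {
	img := "registry.k8s.io/kube-proxy:v1.34.1"
	if !imagePresent(img) {
		fmt.Println(img, "needs transfer; removing any stale tag first")
		// Matches the "crictl rmi <image>" calls in the log; failures are
		// ignored because the tag may simply not exist yet.
		_ = exec.Command("sudo", "crictl", "rmi", img).Run()
	}
}
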
	I1124 13:47:42.507575  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:47:42.507696  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:42.507737  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.507752  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.507776  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:47:42.507812  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:42.512031  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:47:42.512160  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.512183  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:47:42.512220  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:47:42.512281  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:42.542249  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1124 13:47:42.542293  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1124 13:47:42.542356  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:47:42.542419  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:47:42.542436  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1124 13:47:42.542450  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 13:47:42.542460  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1124 13:47:42.542482  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 13:47:42.542522  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1124 13:47:42.542541  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1124 13:47:42.547506  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:47:42.547609  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:42.591222  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:47:42.591265  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:47:42.591339  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:42.591358  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:42.630891  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1124 13:47:42.630960  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1124 13:47:42.635881  608917 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.635984  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1124 13:47:42.696822  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1124 13:47:42.696868  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1124 13:47:42.696964  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1124 13:47:42.696987  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
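
Every cached tarball is guarded by a stat on the destination path: only when the stat fails ("No such file or directory") is the file copied over, which is what the scp lines above show. A local-filesystem sketch of that guard, using io.Copy in place of minikube's scp-over-SSH; paths mirror the log, the rest is illustrative.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyIfMissing mimics the stat-then-scp guard in the log: the transfer is
// skipped when the destination already exists.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to do
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
		return err
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	err := copyIfMissing(
		os.ExpandEnv("$HOME/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1"),
		"/var/lib/minikube/images/pause_3.10.1")
	fmt.Println(err)
}
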
	I1124 13:47:42.855586  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 13:47:43.017613  608917 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:43.017692  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:47:43.363331  608917 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1124 13:47:43.363429  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:44.322473  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.304751727s)
	I1124 13:47:44.322506  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1124 13:47:44.322534  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:44.322535  608917 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 13:47:44.322572  608917 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:44.322581  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:47:44.322611  608917 ssh_runner.go:195] Run: which crictl
	I1124 13:47:44.327186  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:42.765072  607669 out.go:252]   - Generating certificates and keys ...
	I1124 13:47:42.765189  607669 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:47:42.765429  607669 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:47:42.918631  607669 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:47:43.145530  607669 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:47:43.262863  607669 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:47:43.516853  607669 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:47:43.680193  607669 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:47:43.680382  607669 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-513442] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:47:43.927450  607669 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:47:43.927668  607669 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-513442] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:47:44.210866  607669 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:47:44.444469  607669 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:47:44.571652  607669 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:47:44.571791  607669 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:47:44.658495  607669 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:47:44.899827  607669 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:47:45.259836  607669 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:47:45.407067  607669 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:47:45.407645  607669 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:47:45.412109  607669 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
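
The "[certs]" phase above is kubeadm generating the cluster's certificate authorities and leaf certificates. For orientation only, a self-contained Go snippet that creates a self-signed CA roughly analogous to one of those steps; kubeadm's real certificates use different key types and subjects, and only the CommonName is taken from this log.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Minimal self-signed CA, a rough analogue of a "[certs] Generating ..." step.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
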
	I1124 13:47:42.868629  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:47:45.407011  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.084400483s)
	I1124 13:47:45.407048  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1124 13:47:45.407074  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:45.407121  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:47:45.407011  608917 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.079785919s)
	I1124 13:47:45.407225  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:46.754417  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.347254819s)
	I1124 13:47:46.754464  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 13:47:46.754487  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:46.754539  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:47:46.754423  608917 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.34716741s)
	I1124 13:47:46.754625  608917 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:47:46.791381  608917 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 13:47:46.791500  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:48.250258  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.49567347s)
	I1124 13:47:48.250293  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 13:47:48.250322  608917 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:48.250369  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:47:48.250393  608917 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.458859359s)
	I1124 13:47:48.250436  608917 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 13:47:48.250458  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 13:47:49.525346  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.274952475s)
	I1124 13:47:49.525372  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 13:47:49.525397  608917 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:49.525432  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:47:45.413783  607669 out.go:252]   - Booting up control plane ...
	I1124 13:47:45.414000  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:47:45.414122  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:47:45.415606  607669 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:47:45.433197  607669 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:47:45.434777  607669 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:47:45.434850  607669 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:47:45.555124  607669 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 13:47:47.870054  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
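
The healthz probe that timed out above is a plain HTTPS GET against the apiserver with a bounded deadline. A minimal reproduction of that probe; the address comes from the log, and TLS verification is skipped only because the control plane uses its own self-signed CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Short timeout reproduces the "context deadline exceeded" failure mode above.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
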
	I1124 13:47:47.870131  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:47:47.870207  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:47:47.909612  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:47:47.909637  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:47.909644  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:47:47.909649  572647 cri.go:89] found id: ""
	I1124 13:47:47.909660  572647 logs.go:282] 3 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:47:47.909721  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.915163  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.920826  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.926251  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:47:47.926326  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:47:47.968362  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:47.968399  572647 cri.go:89] found id: ""
	I1124 13:47:47.968412  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:47:47.968487  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:47.973840  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:47:47.973955  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:47:48.011120  572647 cri.go:89] found id: ""
	I1124 13:47:48.011151  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.011163  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:47:48.011172  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:47:48.011242  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:47:48.049409  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:48.049433  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:48.049439  572647 cri.go:89] found id: ""
	I1124 13:47:48.049449  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:47:48.049612  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.055041  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.061717  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:47:48.061795  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:47:48.098008  572647 cri.go:89] found id: ""
	I1124 13:47:48.098036  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.098048  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:47:48.098056  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:47:48.098116  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:47:48.134832  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:47:48.134858  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:48.134864  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:48.134868  572647 cri.go:89] found id: ""
	I1124 13:47:48.134879  572647 logs.go:282] 3 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:47:48.134960  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.140512  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.146067  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:47:48.151167  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:47:48.151293  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:47:48.194241  572647 cri.go:89] found id: ""
	I1124 13:47:48.194275  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.194287  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:47:48.194297  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:47:48.194366  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:47:48.235586  572647 cri.go:89] found id: ""
	I1124 13:47:48.235617  572647 logs.go:282] 0 containers: []
	W1124 13:47:48.235629  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:47:48.235644  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:47:48.235660  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:47:48.322131  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:47:48.322175  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:47:48.358925  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:47:48.358964  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:47:48.399403  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:47:48.399439  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:47:48.442576  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:47:48.442621  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:47:48.490297  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:47:48.490336  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:47:48.543239  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:47:48.543277  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:47:48.591561  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:47:48.591604  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:47:48.639975  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:47:48.640012  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:47:48.703335  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:47:48.703393  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:47:48.760778  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:47:48.760820  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:47:48.887283  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:47:48.887328  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:47:48.915138  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:47:48.915177  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
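
After the failed healthz check, the post-mortem gathers logs per component: crictl logs --tail 400 for each container ID found earlier, plus journalctl for kubelet and containerd. A compact sketch of that collection loop; the container IDs below are placeholders, not values from this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder IDs; the real run substitutes the IDs returned by
	// "crictl ps -a --quiet --name=<component>".
	ids := []string{"<kube-apiserver-id>", "<etcd-id>"}
	for _, id := range ids {
		out, _ := exec.Command("/bin/bash", "-c",
			fmt.Sprintf("sudo crictl logs --tail 400 %s", id)).CombinedOutput()
		fmt.Printf("==> %s <==\n%s\n", id, out)
	}
	// System-level logs are collected the same way.
	out, _ := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").CombinedOutput()
	fmt.Printf("==> kubelet <==\n%s\n", out)
}
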
	I1124 13:47:50.557442  607669 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002632 seconds
	I1124 13:47:50.557627  607669 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:47:50.572390  607669 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:47:51.098533  607669 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:47:51.098764  607669 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-513442 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:47:51.610053  607669 kubeadm.go:319] [bootstrap-token] Using token: eki30b.4i7191y9601t9kqb
	I1124 13:47:51.611988  607669 out.go:252]   - Configuring RBAC rules ...
	I1124 13:47:51.612142  607669 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:47:51.618056  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:47:51.627751  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:47:51.631902  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:47:51.635666  607669 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:47:51.643042  607669 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:47:51.655046  607669 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:47:51.879254  607669 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:47:52.022857  607669 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:47:52.024273  607669 kubeadm.go:319] 
	I1124 13:47:52.024439  607669 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:47:52.024451  607669 kubeadm.go:319] 
	I1124 13:47:52.024565  607669 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:47:52.024593  607669 kubeadm.go:319] 
	I1124 13:47:52.024628  607669 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:47:52.024712  607669 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:47:52.024786  607669 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:47:52.024795  607669 kubeadm.go:319] 
	I1124 13:47:52.024870  607669 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:47:52.024880  607669 kubeadm.go:319] 
	I1124 13:47:52.024984  607669 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:47:52.024995  607669 kubeadm.go:319] 
	I1124 13:47:52.025066  607669 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:47:52.025175  607669 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:47:52.025273  607669 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:47:52.025282  607669 kubeadm.go:319] 
	I1124 13:47:52.025399  607669 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:47:52.025508  607669 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:47:52.025517  607669 kubeadm.go:319] 
	I1124 13:47:52.025633  607669 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token eki30b.4i7191y9601t9kqb \
	I1124 13:47:52.025782  607669 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c \
	I1124 13:47:52.025814  607669 kubeadm.go:319] 	--control-plane 
	I1124 13:47:52.025823  607669 kubeadm.go:319] 
	I1124 13:47:52.025955  607669 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:47:52.025964  607669 kubeadm.go:319] 
	I1124 13:47:52.026081  607669 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token eki30b.4i7191y9601t9kqb \
	I1124 13:47:52.026226  607669 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c 
	I1124 13:47:52.029215  607669 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:47:52.029395  607669 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:47:52.029436  607669 cni.go:84] Creating CNI manager for ""
	I1124 13:47:52.029450  607669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:52.032075  607669 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:47:52.378094  608917 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.852631537s)
	I1124 13:47:52.378131  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 13:47:52.378164  608917 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:52.378216  608917 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:47:52.826755  608917 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-370498/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 13:47:52.826808  608917 cache_images.go:125] Successfully loaded all cached images
	I1124 13:47:52.826816  608917 cache_images.go:94] duration metric: took 10.70919772s to LoadCachedImages
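
Loading the cache ends with each staged tarball being imported into containerd's k8s.io namespace ("ctr -n=k8s.io images import ..."), one at a time. A sketch of that import loop over the staging directory; illustrative, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Import every staged tarball, as the "Loading image: /var/lib/minikube/images/..."
	// lines show.
	dir := "/var/lib/minikube/images"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		tar := filepath.Join(dir, e.Name())
		out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).CombinedOutput()
		if err != nil {
			fmt.Printf("import %s failed: %v\n%s", tar, err, out)
			continue
		}
		fmt.Println("loaded", tar)
	}
}
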
	I1124 13:47:52.826831  608917 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1124 13:47:52.826984  608917 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-608395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
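
The unit text above becomes a systemd drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf, per the scp lines further down) whose empty ExecStart= clears the packaged command line before the minikube-specific one is set. A sketch of writing that drop-in and starting kubelet, assuming it runs as root on the node; the flags are the ones the log prints for no-preload-608395.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	unit := `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-608395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2

[Install]
`
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(unit), 0644); err != nil {
		panic(err)
	}
	// Reload units and start kubelet, as the later "systemctl daemon-reload" /
	// "systemctl start kubelet" lines do.
	fmt.Println(exec.Command("systemctl", "daemon-reload").Run())
	fmt.Println(exec.Command("systemctl", "start", "kubelet").Run())
}
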
	I1124 13:47:52.827057  608917 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:47:52.858503  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:47:52.858531  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:47:52.858557  608917 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:47:52.858588  608917 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-608395 NodeName:no-preload-608395 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:47:52.858757  608917 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-608395"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
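
The generated config above is a four-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml.new. A small sanity-check sketch that lists the document kinds without any YAML library; illustrative only, minikube does not run this.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// Split on the "---" document separators and report each document's kind.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
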
	
	I1124 13:47:52.858835  608917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:47:52.869416  608917 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 13:47:52.869483  608917 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 13:47:52.881260  608917 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 13:47:52.881274  608917 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 13:47:52.881284  608917 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 13:47:52.881370  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 13:47:52.886648  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 13:47:52.886683  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 13:47:53.829310  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:47:53.844364  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 13:47:53.848663  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 13:47:53.848703  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 13:47:54.078871  608917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 13:47:54.083904  608917 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 13:47:54.083971  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
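
Each missing binary is fetched from dl.k8s.io together with its published .sha256 file, as the "Downloading: ...?checksum=file:..." lines show. A sketch of downloading kubeadm and verifying it against that checksum; the whole binary is held in memory here only to keep the example short, whereas the real download streams to disk and caches the result.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory; fine for a checksum file, and used for the
// binary too just for brevity.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	h := sha256.Sum256(bin)
	got := hex.EncodeToString(h[:])
	want := strings.Fields(string(sum))[0]
	if got != want {
		panic("checksum mismatch for kubeadm")
	}
	if err := os.WriteFile("kubeadm", bin, 0755); err != nil {
		panic(err)
	}
	fmt.Println("kubeadm verified:", want)
}
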
	I1124 13:47:54.263727  608917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:47:54.272819  608917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 13:47:54.287533  608917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:47:54.307319  608917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1124 13:47:54.321728  608917 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:47:54.326108  608917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:47:54.337568  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:47:54.423252  608917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:47:54.446892  608917 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395 for IP: 192.168.103.2
	I1124 13:47:54.446932  608917 certs.go:195] generating shared ca certs ...
	I1124 13:47:54.446950  608917 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.447115  608917 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:47:54.447173  608917 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:47:54.447189  608917 certs.go:257] generating profile certs ...
	I1124 13:47:54.447250  608917 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key
	I1124 13:47:54.447265  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt with IP's: []
	I1124 13:47:54.480111  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt ...
	I1124 13:47:54.480143  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: {Name:mk0373d89f453529126dca865f8c4273a9b76c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.480318  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key ...
	I1124 13:47:54.480326  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.key: {Name:mkd4fd6c97a850045d4415dcd6682504ca05b6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.480412  608917 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0
	I1124 13:47:54.480432  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 13:47:54.564575  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 ...
	I1124 13:47:54.564606  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0: {Name:mk39921501aaa8b9dfdaa0c59584189fbc232834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.564812  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0 ...
	I1124 13:47:54.564832  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0: {Name:mk1e5ec23cae444088ab39a7c9f4bd7f0b68695e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.565002  608917 certs.go:382] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt.211f6cd0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt
	I1124 13:47:54.565092  608917 certs.go:386] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key.211f6cd0 -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key
	I1124 13:47:54.565147  608917 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key
	I1124 13:47:54.565166  608917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt with IP's: []
	I1124 13:47:54.682010  608917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt ...
	I1124 13:47:54.682042  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt: {Name:mk61707e6277a856c1f1cee667479489cd8cfc56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.682251  608917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key ...
	I1124 13:47:54.682270  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key: {Name:mkdc07f88aff1f58330c9757ac629acf2062c9ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:47:54.682520  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:47:54.682564  608917 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:47:54.682574  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:47:54.682602  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:47:54.682626  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:47:54.682651  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:47:54.682697  608917 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:47:54.683371  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:47:54.703387  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:47:54.722770  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:47:54.743107  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:47:54.763697  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 13:47:54.783164  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:47:54.802752  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:47:54.822653  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:47:54.843126  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:47:54.867619  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:47:54.887814  608917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:47:54.907876  608917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:47:54.922379  608917 ssh_runner.go:195] Run: openssl version
	I1124 13:47:54.929636  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:47:54.940237  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.944856  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.944961  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:47:54.983788  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
	I1124 13:47:54.994031  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:47:55.004849  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.010168  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.010231  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:47:55.048930  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:47:55.058618  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:47:55.068496  608917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:52.033462  607669 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:47:52.040052  607669 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 13:47:52.040080  607669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:47:52.058896  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:47:52.863538  607669 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:47:52.863612  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:52.863691  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-513442 minikube.k8s.io/updated_at=2025_11_24T13_47_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=old-k8s-version-513442 minikube.k8s.io/primary=true
	I1124 13:47:52.876635  607669 ops.go:34] apiserver oom_adj: -16
	I1124 13:47:52.948231  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:53.449196  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:53.948546  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:54.448277  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:54.949098  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:55.073505  608917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:55.073568  608917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:47:55.110353  608917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
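The certificate installs at 13:47:54-55 follow OpenSSL's hashed-directory convention: each PEM copied to /usr/share/ca-certificates is hashed with openssl x509 -hash and symlinked into /etc/ssl/certs under <hash>.0, which is where the 51391683.0, 3ec20f2e.0 and b5213941.0 names above come from. A minimal shell sketch of that pattern (the example.pem filename is illustrative):

	cert=/usr/share/ca-certificates/example.pem        # any PEM to be trusted system-wide (hypothetical name)
	hash=$(openssl x509 -hash -noout -in "$cert")      # subject-name hash, e.g. 51391683
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"     # .0 = first certificate with this hash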
	I1124 13:47:55.120226  608917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:47:55.124508  608917 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:47:55.124574  608917 kubeadm.go:401] StartCluster: {Name:no-preload-608395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-608395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:47:55.124676  608917 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:47:55.124734  608917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:47:55.153610  608917 cri.go:89] found id: ""
	I1124 13:47:55.153686  608917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:47:55.163237  608917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:47:55.172281  608917 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:47:55.172352  608917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:47:55.181432  608917 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:47:55.181458  608917 kubeadm.go:158] found existing configuration files:
	
	I1124 13:47:55.181515  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:47:55.190814  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:47:55.190897  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:47:55.200577  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:47:55.210272  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:47:55.210344  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:47:55.219990  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:47:55.228828  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:47:55.228885  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:47:55.238104  608917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:47:55.246631  608917 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:47:55.246745  608917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:47:55.255509  608917 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:47:55.316154  608917 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:47:55.376542  608917 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:47:55.448626  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:55.949156  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:56.449055  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:56.949140  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:57.448946  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:57.948732  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:58.448437  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:58.948803  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.449172  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.948946  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:47:59.001079  572647 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.085873793s)
	W1124 13:47:59.001127  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 13:47:59.001145  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:47:59.001163  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:00.448856  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:00.948957  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:01.448664  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:01.948985  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:02.448486  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:02.948890  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:03.448380  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:03.948515  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:04.448564  607669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:04.527535  607669 kubeadm.go:1114] duration metric: took 11.66399569s to wait for elevateKubeSystemPrivileges
	I1124 13:48:04.527576  607669 kubeadm.go:403] duration metric: took 22.29462596s to StartCluster
	I1124 13:48:04.527612  607669 settings.go:142] acquiring lock: {Name:mka599a3c9bae62ffb84d261186583052ce40f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:04.527702  607669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:48:04.529054  607669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:04.529299  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:48:04.529306  607669 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:48:04.529383  607669 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:48:04.529498  607669 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-513442"
	I1124 13:48:04.529517  607669 config.go:182] Loaded profile config "old-k8s-version-513442": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:48:04.529519  607669 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-513442"
	I1124 13:48:04.529535  607669 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-513442"
	I1124 13:48:04.529561  607669 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-513442"
	I1124 13:48:04.529641  607669 host.go:66] Checking if "old-k8s-version-513442" exists ...
	I1124 13:48:04.529946  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.530180  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.531152  607669 out.go:179] * Verifying Kubernetes components...
	I1124 13:48:04.532717  607669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:48:04.557008  607669 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:48:04.558405  607669 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:04.558429  607669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:48:04.558495  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:48:04.562314  607669 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-513442"
	I1124 13:48:04.562381  607669 host.go:66] Checking if "old-k8s-version-513442" exists ...
	I1124 13:48:04.563175  607669 cli_runner.go:164] Run: docker container inspect old-k8s-version-513442 --format={{.State.Status}}
	I1124 13:48:04.584062  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:48:04.598587  607669 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:04.598613  607669 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:48:04.598683  607669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-513442
	I1124 13:48:04.628606  607669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/old-k8s-version-513442/id_rsa Username:docker}
	I1124 13:48:04.653771  607669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:48:04.701037  607669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:48:04.714197  607669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:04.765729  607669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:04.912320  607669 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 13:48:04.913621  607669 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-513442" to be "Ready" ...
	I1124 13:48:05.136398  607669 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:48:05.160590  608917 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:48:05.160664  608917 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:48:05.160771  608917 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:48:05.160854  608917 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:48:05.160886  608917 kubeadm.go:319] OS: Linux
	I1124 13:48:05.160993  608917 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:48:05.161038  608917 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:48:05.161128  608917 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:48:05.161215  608917 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:48:05.161290  608917 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:48:05.161348  608917 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:48:05.161407  608917 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:48:05.161478  608917 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:48:05.161607  608917 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:48:05.161758  608917 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:48:05.161894  608917 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:48:05.162009  608917 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:48:05.163691  608917 out.go:252]   - Generating certificates and keys ...
	I1124 13:48:05.163805  608917 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:48:05.163947  608917 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:48:05.164054  608917 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:48:05.164154  608917 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:48:05.164250  608917 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:48:05.164325  608917 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:48:05.164403  608917 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:48:05.164579  608917 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-608395] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:48:05.164662  608917 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:48:05.164844  608917 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-608395] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:48:05.164993  608917 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:48:05.165088  608917 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:48:05.165130  608917 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:48:05.165182  608917 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:48:05.165250  608917 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:48:05.165313  608917 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:48:05.165382  608917 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:48:05.165456  608917 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:48:05.165506  608917 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:48:05.165580  608917 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:48:05.165637  608917 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:48:05.167858  608917 out.go:252]   - Booting up control plane ...
	I1124 13:48:05.167962  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:48:05.168043  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:48:05.168104  608917 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:48:05.168199  608917 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:48:05.168298  608917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:48:05.168436  608917 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:48:05.168514  608917 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:48:05.168558  608917 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:48:05.168715  608917 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:48:05.168854  608917 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:48:05.168953  608917 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001985013s
	I1124 13:48:05.169093  608917 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:48:05.169202  608917 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1124 13:48:05.169339  608917 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:48:05.169461  608917 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:48:05.169582  608917 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.171045551s
	I1124 13:48:05.169691  608917 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.746683308s
	I1124 13:48:05.169782  608917 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002983514s
	I1124 13:48:05.169958  608917 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:48:05.170079  608917 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:48:05.170136  608917 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:48:05.170449  608917 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-608395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:48:05.170534  608917 kubeadm.go:319] [bootstrap-token] Using token: 0m3tk6.bp5t9g266aj6zg5e
	I1124 13:48:05.172344  608917 out.go:252]   - Configuring RBAC rules ...
	I1124 13:48:05.172497  608917 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:48:05.172606  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:48:05.172790  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:48:05.172947  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:48:05.173067  608917 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:48:05.173152  608917 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:48:05.173251  608917 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:48:05.173290  608917 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:48:05.173330  608917 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:48:05.173336  608917 kubeadm.go:319] 
	I1124 13:48:05.173391  608917 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:48:05.173397  608917 kubeadm.go:319] 
	I1124 13:48:05.173470  608917 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:48:05.173476  608917 kubeadm.go:319] 
	I1124 13:48:05.173498  608917 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:48:05.173553  608917 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:48:05.173610  608917 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:48:05.173623  608917 kubeadm.go:319] 
	I1124 13:48:05.173669  608917 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:48:05.173675  608917 kubeadm.go:319] 
	I1124 13:48:05.173718  608917 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:48:05.173727  608917 kubeadm.go:319] 
	I1124 13:48:05.173778  608917 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:48:05.173858  608917 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:48:05.173981  608917 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:48:05.173990  608917 kubeadm.go:319] 
	I1124 13:48:05.174085  608917 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:48:05.174165  608917 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:48:05.174170  608917 kubeadm.go:319] 
	I1124 13:48:05.174250  608917 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0m3tk6.bp5t9g266aj6zg5e \
	I1124 13:48:05.174352  608917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c \
	I1124 13:48:05.174376  608917 kubeadm.go:319] 	--control-plane 
	I1124 13:48:05.174381  608917 kubeadm.go:319] 
	I1124 13:48:05.174459  608917 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:48:05.174465  608917 kubeadm.go:319] 
	I1124 13:48:05.174560  608917 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0m3tk6.bp5t9g266aj6zg5e \
	I1124 13:48:05.174802  608917 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c 
	I1124 13:48:05.174826  608917 cni.go:84] Creating CNI manager for ""
	I1124 13:48:05.174836  608917 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:48:05.177484  608917 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:48:05.137677  607669 addons.go:530] duration metric: took 608.290782ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:48:01.553682  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:02.346718  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:51122->192.168.76.2:8443: read: connection reset by peer
	I1124 13:48:02.346797  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:02.346868  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:02.379430  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:02.379461  572647 cri.go:89] found id: "6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:48:02.379468  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:02.379472  572647 cri.go:89] found id: ""
	I1124 13:48:02.379481  572647 logs.go:282] 3 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:02.379554  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.384666  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.389028  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.393413  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:02.393493  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:02.423298  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:02.423317  572647 cri.go:89] found id: ""
	I1124 13:48:02.423325  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:02.423377  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.428323  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:02.428396  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:02.458971  572647 cri.go:89] found id: ""
	I1124 13:48:02.459002  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.459014  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:02.459023  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:02.459136  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:02.495221  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:02.495253  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:02.495258  572647 cri.go:89] found id: ""
	I1124 13:48:02.495267  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:02.495325  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.504536  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.513709  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:02.513782  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:02.545556  572647 cri.go:89] found id: ""
	I1124 13:48:02.545589  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.545603  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:02.545613  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:02.545686  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:02.575683  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:02.575710  572647 cri.go:89] found id: "daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:48:02.575714  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:02.575717  572647 cri.go:89] found id: ""
	I1124 13:48:02.575725  572647 logs.go:282] 3 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:02.575799  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.580340  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.584784  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:02.588717  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:02.588774  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:02.617522  572647 cri.go:89] found id: ""
	I1124 13:48:02.617550  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.617558  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:02.617567  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:02.617616  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:02.647375  572647 cri.go:89] found id: ""
	I1124 13:48:02.647407  572647 logs.go:282] 0 containers: []
	W1124 13:48:02.647418  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:02.647432  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:02.647445  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:02.685850  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:02.685900  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:02.794118  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:02.794164  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:02.866960  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:02.866982  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:02.866997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:02.908627  572647 logs.go:123] Gathering logs for kube-apiserver [6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8] ...
	I1124 13:48:02.908671  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ba099dbfe03c53cb7a40393cab6635322c5372979bf7ba6869730b7b76a01e8"
	I1124 13:48:02.949348  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:02.949380  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:02.997498  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:02.997541  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:03.065816  572647 logs.go:123] Gathering logs for kube-controller-manager [daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf] ...
	I1124 13:48:03.065856  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 daace7b3ca5876bbcd7819611db0917a66e6e74f443673d2d192e8840d66bcbf"
	I1124 13:48:03.101360  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:03.101393  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:03.140140  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:03.140183  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:03.160020  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:03.160058  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:03.202092  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:03.202136  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:03.247020  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:03.247060  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:03.283475  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:03.283518  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
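The log-gathering pass above repeats one pattern per component: crictl ps -a --quiet --name=<component> to enumerate container IDs (running or exited), then crictl logs --tail 400 <id> for each. A condensed shell sketch of that loop, using kube-apiserver as the example component:

	# tail the last 400 log lines of every kube-apiserver container, including exited ones
	for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	    sudo /usr/local/bin/crictl logs --tail 400 "$id"
	done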
	I1124 13:48:05.832996  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:05.833478  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:05.833543  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:05.833607  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:05.862229  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:05.862254  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:05.862258  572647 cri.go:89] found id: ""
	I1124 13:48:05.862267  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:05.862320  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.867091  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.871378  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:05.871455  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:05.900338  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:05.900361  572647 cri.go:89] found id: ""
	I1124 13:48:05.900370  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:05.900428  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.904531  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:05.904606  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:05.933536  572647 cri.go:89] found id: ""
	I1124 13:48:05.933565  572647 logs.go:282] 0 containers: []
	W1124 13:48:05.933579  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:05.933587  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:05.933645  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:05.961942  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:05.961966  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:05.961980  572647 cri.go:89] found id: ""
	I1124 13:48:05.961988  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:05.962048  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.966413  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:05.970560  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:05.970640  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:05.999021  572647 cri.go:89] found id: ""
	I1124 13:48:05.999046  572647 logs.go:282] 0 containers: []
	W1124 13:48:05.999057  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:05.999065  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:05.999125  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:06.030192  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:06.030216  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:06.030222  572647 cri.go:89] found id: ""
	I1124 13:48:06.030233  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:06.030291  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:06.034509  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:06.038518  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:06.038602  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:06.067432  572647 cri.go:89] found id: ""
	I1124 13:48:06.067459  572647 logs.go:282] 0 containers: []
	W1124 13:48:06.067469  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:06.067477  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:06.067557  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:06.098683  572647 cri.go:89] found id: ""
	I1124 13:48:06.098712  572647 logs.go:282] 0 containers: []
	W1124 13:48:06.098723  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:06.098736  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:06.098753  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:06.163737  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:06.163765  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:06.163783  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:05.179143  608917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:48:05.184780  608917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:48:05.184802  608917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:48:05.199547  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:48:05.451312  608917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:48:05.451481  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:05.451599  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-608395 minikube.k8s.io/updated_at=2025_11_24T13_48_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=no-preload-608395 minikube.k8s.io/primary=true
	I1124 13:48:05.479434  608917 ops.go:34] apiserver oom_adj: -16
	I1124 13:48:05.560179  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:06.061204  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:06.560802  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:07.061219  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:07.561139  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:08.061015  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:08.561034  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.061268  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.560397  608917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:48:09.636185  608917 kubeadm.go:1114] duration metric: took 4.184744627s to wait for elevateKubeSystemPrivileges
	I1124 13:48:09.636235  608917 kubeadm.go:403] duration metric: took 14.511667218s to StartCluster
	I1124 13:48:09.636257  608917 settings.go:142] acquiring lock: {Name:mka599a3c9bae62ffb84d261186583052ce40f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:09.636332  608917 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:48:09.637980  608917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:48:09.638233  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:48:09.638262  608917 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:48:09.638340  608917 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:48:09.638439  608917 addons.go:70] Setting storage-provisioner=true in profile "no-preload-608395"
	I1124 13:48:09.638460  608917 addons.go:239] Setting addon storage-provisioner=true in "no-preload-608395"
	I1124 13:48:09.638459  608917 addons.go:70] Setting default-storageclass=true in profile "no-preload-608395"
	I1124 13:48:09.638486  608917 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-608395"
	I1124 13:48:09.638512  608917 host.go:66] Checking if "no-preload-608395" exists ...
	I1124 13:48:09.638608  608917 config.go:182] Loaded profile config "no-preload-608395": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:48:09.638889  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.639090  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.640719  608917 out.go:179] * Verifying Kubernetes components...
	I1124 13:48:09.642235  608917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:48:09.665980  608917 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:48:09.668239  608917 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:09.668262  608917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:48:09.668334  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:48:09.668545  608917 addons.go:239] Setting addon default-storageclass=true in "no-preload-608395"
	I1124 13:48:09.668594  608917 host.go:66] Checking if "no-preload-608395" exists ...
	I1124 13:48:09.669115  608917 cli_runner.go:164] Run: docker container inspect no-preload-608395 --format={{.State.Status}}
	I1124 13:48:09.708052  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:48:09.711213  608917 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:09.711236  608917 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:48:09.711297  608917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-608395
	I1124 13:48:09.737250  608917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/no-preload-608395/id_rsa Username:docker}
	I1124 13:48:09.745340  608917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:48:09.808489  608917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:48:09.832661  608917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:48:09.863280  608917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:48:09.941101  608917 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 13:48:09.942521  608917 node_ready.go:35] waiting up to 6m0s for node "no-preload-608395" to be "Ready" ...
	I1124 13:48:10.163475  608917 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:48:05.418106  607669 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-513442" context rescaled to 1 replicas
	W1124 13:48:06.917478  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:09.417409  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:06.199640  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:06.199675  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:06.235793  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:06.235827  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:06.290172  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:06.290212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:06.325935  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:06.325975  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:06.359485  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:06.359523  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:06.406787  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:06.406834  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:06.503206  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:06.503251  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:06.520877  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:06.520924  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:06.561472  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:06.561510  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:06.591722  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:06.591748  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:09.128043  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:09.128549  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:09.128609  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:09.128678  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:09.158194  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:09.158216  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:09.158220  572647 cri.go:89] found id: ""
	I1124 13:48:09.158229  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:09.158308  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.162575  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.167402  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:09.167472  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:09.196608  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:09.196633  572647 cri.go:89] found id: ""
	I1124 13:48:09.196645  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:09.196709  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.201107  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:09.201190  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:09.232265  572647 cri.go:89] found id: ""
	I1124 13:48:09.232300  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.232311  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:09.232320  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:09.232386  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:09.272990  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:09.273017  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:09.273022  572647 cri.go:89] found id: ""
	I1124 13:48:09.273033  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:09.273100  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.278614  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.283409  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:09.283485  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:09.314562  572647 cri.go:89] found id: ""
	I1124 13:48:09.314592  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.314604  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:09.314611  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:09.314682  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:09.346903  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:09.346963  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:09.346970  572647 cri.go:89] found id: ""
	I1124 13:48:09.346979  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:09.347049  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.351444  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:09.355601  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:09.355675  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:09.387667  572647 cri.go:89] found id: ""
	I1124 13:48:09.387697  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.387709  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:09.387716  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:09.387779  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:09.417828  572647 cri.go:89] found id: ""
	I1124 13:48:09.417854  572647 logs.go:282] 0 containers: []
	W1124 13:48:09.417863  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:09.417876  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:09.417894  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:09.518663  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:09.518707  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:09.538049  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:09.538093  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:09.606209  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:09.606232  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:09.606246  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:09.646703  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:09.646736  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:09.708037  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:09.708078  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:09.779698  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:09.779735  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:09.819613  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:09.819663  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:09.867349  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:09.867388  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:09.917580  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:09.917620  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:09.959751  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:09.959793  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:10.006236  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:10.006274  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:10.165110  608917 addons.go:530] duration metric: took 526.764143ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:48:10.444998  608917 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-608395" context rescaled to 1 replicas
	W1124 13:48:11.948043  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:14.445721  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:11.417485  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:13.418201  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:12.563487  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:12.564031  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:12.564091  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:12.564151  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:12.598524  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:12.598553  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:12.598559  572647 cri.go:89] found id: ""
	I1124 13:48:12.598570  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:12.598654  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.603466  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.608383  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:12.608462  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:12.652395  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:12.652422  572647 cri.go:89] found id: ""
	I1124 13:48:12.652433  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:12.652503  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.657966  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:12.658060  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:12.693432  572647 cri.go:89] found id: ""
	I1124 13:48:12.693468  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.693480  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:12.693489  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:12.693558  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:12.731546  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:12.731572  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:12.731579  572647 cri.go:89] found id: ""
	I1124 13:48:12.731590  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:12.731820  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.737055  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.741859  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:12.741953  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:12.776627  572647 cri.go:89] found id: ""
	I1124 13:48:12.776652  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.776660  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:12.776667  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:12.776735  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:12.809077  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:12.809099  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:12.809102  572647 cri.go:89] found id: ""
	I1124 13:48:12.809112  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:12.809166  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.813963  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:12.818488  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:12.818563  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:12.852844  572647 cri.go:89] found id: ""
	I1124 13:48:12.852879  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.852891  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:12.852900  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:12.853034  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:12.889177  572647 cri.go:89] found id: ""
	I1124 13:48:12.889228  572647 logs.go:282] 0 containers: []
	W1124 13:48:12.889240  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:12.889255  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:12.889278  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:12.941108  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:12.941146  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:13.012950  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:13.012998  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:13.059324  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:13.059367  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:13.096188  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:13.096235  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:13.157287  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:13.157338  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:13.198203  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:13.198250  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:13.219729  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:13.219773  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:13.293315  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:13.293338  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:13.293356  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:13.338975  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:13.339029  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:13.385546  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:13.385596  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:13.427130  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:13.427162  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:16.027717  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:16.028251  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:16.028310  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:16.028363  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:16.058811  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:16.058839  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:16.058847  572647 cri.go:89] found id: ""
	I1124 13:48:16.058858  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:16.058999  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.063797  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.068208  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:16.068282  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:16.097374  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:16.097404  572647 cri.go:89] found id: ""
	I1124 13:48:16.097416  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:16.097484  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.102967  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:16.103045  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:16.133626  572647 cri.go:89] found id: ""
	I1124 13:48:16.133660  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.133670  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:16.133676  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:16.133746  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:16.165392  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:16.165424  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:16.165431  572647 cri.go:89] found id: ""
	I1124 13:48:16.165442  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:16.165507  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.170277  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.174579  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:16.174661  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	W1124 13:48:16.445831  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:18.945868  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:15.917184  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	W1124 13:48:17.917526  607669 node_ready.go:57] node "old-k8s-version-513442" has "Ready":"False" status (will retry)
	I1124 13:48:19.416721  607669 node_ready.go:49] node "old-k8s-version-513442" is "Ready"
	I1124 13:48:19.416760  607669 node_ready.go:38] duration metric: took 14.503103561s for node "old-k8s-version-513442" to be "Ready" ...
	I1124 13:48:19.416778  607669 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:48:19.416833  607669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:48:19.430267  607669 api_server.go:72] duration metric: took 14.90093273s to wait for apiserver process to appear ...
	I1124 13:48:19.430299  607669 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:48:19.430326  607669 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 13:48:19.436844  607669 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 13:48:19.438582  607669 api_server.go:141] control plane version: v1.28.0
	I1124 13:48:19.438618  607669 api_server.go:131] duration metric: took 8.311152ms to wait for apiserver health ...
	I1124 13:48:19.438632  607669 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:48:19.443134  607669 system_pods.go:59] 8 kube-system pods found
	I1124 13:48:19.443191  607669 system_pods.go:61] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.443200  607669 system_pods.go:61] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.443207  607669 system_pods.go:61] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.443213  607669 system_pods.go:61] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.443219  607669 system_pods.go:61] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.443225  607669 system_pods.go:61] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.443231  607669 system_pods.go:61] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.443240  607669 system_pods.go:61] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.443248  607669 system_pods.go:74] duration metric: took 4.608559ms to wait for pod list to return data ...
	I1124 13:48:19.443260  607669 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:48:19.446125  607669 default_sa.go:45] found service account: "default"
	I1124 13:48:19.446157  607669 default_sa.go:55] duration metric: took 2.890045ms for default service account to be created ...
	I1124 13:48:19.446170  607669 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:48:19.450324  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:19.450375  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.450385  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.450394  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.450408  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.450415  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.450425  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.450434  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.450449  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.450484  607669 retry.go:31] will retry after 306.547577ms: missing components: kube-dns
	I1124 13:48:19.761785  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:19.761821  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:19.761828  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:19.761835  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:19.761839  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:19.761843  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:19.761846  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:19.761850  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:19.761855  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:19.761871  607669 retry.go:31] will retry after 263.639636ms: missing components: kube-dns
	I1124 13:48:20.030723  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:20.030764  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:20.030773  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:20.030781  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:20.030787  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:20.030794  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:20.030799  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:20.030804  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:20.030812  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:20.030836  607669 retry.go:31] will retry after 485.23875ms: missing components: kube-dns
	I1124 13:48:16.203971  572647 cri.go:89] found id: ""
	I1124 13:48:16.204004  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.204016  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:16.204025  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:16.204087  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:16.233087  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:16.233113  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:16.233119  572647 cri.go:89] found id: ""
	I1124 13:48:16.233130  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:16.233184  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.237937  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:16.242366  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:16.242450  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:16.273007  572647 cri.go:89] found id: ""
	I1124 13:48:16.273034  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.273043  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:16.273049  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:16.273100  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:16.302483  572647 cri.go:89] found id: ""
	I1124 13:48:16.302518  572647 logs.go:282] 0 containers: []
	W1124 13:48:16.302537  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:16.302553  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:16.302575  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:16.360777  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:16.360817  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:16.391672  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:16.391700  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:16.490704  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:16.490743  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:16.530411  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:16.530448  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:16.567070  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:16.567107  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:16.601689  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:16.601728  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:16.646105  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:16.646143  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:16.682522  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:16.682560  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:16.699850  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:16.699887  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:16.759811  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:16.759835  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:16.759853  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:16.795013  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:16.795048  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.334057  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:19.334568  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:19.334661  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:19.334733  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:19.365714  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:19.365735  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.365739  572647 cri.go:89] found id: ""
	I1124 13:48:19.365747  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:19.365800  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.370354  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.374856  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:19.374992  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:19.405492  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:19.405519  572647 cri.go:89] found id: ""
	I1124 13:48:19.405529  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:19.405589  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.411364  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:19.411426  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:19.443360  572647 cri.go:89] found id: ""
	I1124 13:48:19.443391  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.443404  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:19.443412  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:19.443477  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:19.475298  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:19.475324  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:19.475331  572647 cri.go:89] found id: ""
	I1124 13:48:19.475341  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:19.475407  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.480369  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.484782  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:19.484863  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:19.514622  572647 cri.go:89] found id: ""
	I1124 13:48:19.514666  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.514716  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:19.514726  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:19.514807  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:19.550847  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:19.550872  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:19.550877  572647 cri.go:89] found id: ""
	I1124 13:48:19.550886  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:19.550963  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.556478  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:19.561320  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:19.561401  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:19.596190  572647 cri.go:89] found id: ""
	I1124 13:48:19.596226  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.596238  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:19.596247  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:19.596309  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:19.627382  572647 cri.go:89] found id: ""
	I1124 13:48:19.627413  572647 logs.go:282] 0 containers: []
	W1124 13:48:19.627424  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:19.627436  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:19.627452  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:19.694796  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:19.694836  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:19.752858  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:19.752896  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:19.788182  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:19.788224  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:19.879216  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:19.879255  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:19.940757  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:19.940776  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:19.940790  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:19.979681  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:19.979726  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:20.020042  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:20.020085  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:20.064463  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:20.064499  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:20.098012  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:20.098044  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:20.132122  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:20.132157  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:20.148958  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:20.148997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:20.521094  607669 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:20.521123  607669 system_pods.go:89] "coredns-5dd5756b68-b5rrl" [4e6c9b7c-5f0a-4c60-8197-20e985a07403] Running
	I1124 13:48:20.521130  607669 system_pods.go:89] "etcd-old-k8s-version-513442" [0b1a1913-a17b-4362-af66-49436a831759] Running
	I1124 13:48:20.521133  607669 system_pods.go:89] "kindnet-tpjvb" [c7df115a-8394-4f80-ac6c-5b1fc95337b5] Running
	I1124 13:48:20.521137  607669 system_pods.go:89] "kube-apiserver-old-k8s-version-513442" [722a96a1-58fb-4240-9c3b-4732b2fc0877] Running
	I1124 13:48:20.521141  607669 system_pods.go:89] "kube-controller-manager-old-k8s-version-513442" [df7953a7-c9cf-4854-b6bb-c43b0415e709] Running
	I1124 13:48:20.521145  607669 system_pods.go:89] "kube-proxy-hzfcx" [f4ba208a-1a78-46ae-9684-ff3309400852] Running
	I1124 13:48:20.521148  607669 system_pods.go:89] "kube-scheduler-old-k8s-version-513442" [c400bc97-a209-437d-ba96-60c58a4b8878] Running
	I1124 13:48:20.521151  607669 system_pods.go:89] "storage-provisioner" [65efb270-100a-4e7c-bee8-24de1df28586] Running
	I1124 13:48:20.521159  607669 system_pods.go:126] duration metric: took 1.074982882s to wait for k8s-apps to be running ...
	I1124 13:48:20.521166  607669 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:48:20.521215  607669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:48:20.535666  607669 system_svc.go:56] duration metric: took 14.486184ms WaitForService to wait for kubelet
	I1124 13:48:20.535706  607669 kubeadm.go:587] duration metric: took 16.006375183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:48:20.535732  607669 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:48:20.538619  607669 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:48:20.538646  607669 node_conditions.go:123] node cpu capacity is 8
	I1124 13:48:20.538662  607669 node_conditions.go:105] duration metric: took 2.9245ms to run NodePressure ...
	I1124 13:48:20.538676  607669 start.go:242] waiting for startup goroutines ...
	I1124 13:48:20.538683  607669 start.go:247] waiting for cluster config update ...
	I1124 13:48:20.538693  607669 start.go:256] writing updated cluster config ...
	I1124 13:48:20.539040  607669 ssh_runner.go:195] Run: rm -f paused
	I1124 13:48:20.543325  607669 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:20.547793  607669 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-b5rrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.552447  607669 pod_ready.go:94] pod "coredns-5dd5756b68-b5rrl" is "Ready"
	I1124 13:48:20.552472  607669 pod_ready.go:86] duration metric: took 4.651627ms for pod "coredns-5dd5756b68-b5rrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.556328  607669 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.561689  607669 pod_ready.go:94] pod "etcd-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.561717  607669 pod_ready.go:86] duration metric: took 5.363766ms for pod "etcd-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.564634  607669 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.569265  607669 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.569291  607669 pod_ready.go:86] duration metric: took 4.631558ms for pod "kube-apiserver-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.572304  607669 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:20.948397  607669 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-513442" is "Ready"
	I1124 13:48:20.948423  607669 pod_ready.go:86] duration metric: took 376.095956ms for pod "kube-controller-manager-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.148648  607669 pod_ready.go:83] waiting for pod "kube-proxy-hzfcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.548255  607669 pod_ready.go:94] pod "kube-proxy-hzfcx" is "Ready"
	I1124 13:48:21.548288  607669 pod_ready.go:86] duration metric: took 399.608636ms for pod "kube-proxy-hzfcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:21.748744  607669 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:22.147789  607669 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-513442" is "Ready"
	I1124 13:48:22.147821  607669 pod_ready.go:86] duration metric: took 399.0528ms for pod "kube-scheduler-old-k8s-version-513442" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:22.147833  607669 pod_ready.go:40] duration metric: took 1.604464617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:22.193883  607669 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 13:48:22.196207  607669 out.go:203] 
	W1124 13:48:22.197964  607669 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 13:48:22.199516  607669 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 13:48:22.201541  607669 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-513442" cluster and "default" namespace by default
	W1124 13:48:20.947014  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	W1124 13:48:22.948554  608917 node_ready.go:57] node "no-preload-608395" has "Ready":"False" status (will retry)
	I1124 13:48:24.446130  608917 node_ready.go:49] node "no-preload-608395" is "Ready"
	I1124 13:48:24.446168  608917 node_ready.go:38] duration metric: took 14.503611427s for node "no-preload-608395" to be "Ready" ...
	I1124 13:48:24.446195  608917 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:48:24.446254  608917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:48:24.460952  608917 api_server.go:72] duration metric: took 14.82264088s to wait for apiserver process to appear ...
	I1124 13:48:24.460990  608917 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:48:24.461021  608917 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 13:48:24.466768  608917 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 13:48:24.468117  608917 api_server.go:141] control plane version: v1.34.1
	I1124 13:48:24.468151  608917 api_server.go:131] duration metric: took 7.151862ms to wait for apiserver health ...
	I1124 13:48:24.468164  608917 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:48:24.473836  608917 system_pods.go:59] 8 kube-system pods found
	I1124 13:48:24.473891  608917 system_pods.go:61] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.473901  608917 system_pods.go:61] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.473965  608917 system_pods.go:61] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.473980  608917 system_pods.go:61] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.473987  608917 system_pods.go:61] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.473995  608917 system_pods.go:61] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.474001  608917 system_pods.go:61] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.474011  608917 system_pods.go:61] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.474025  608917 system_pods.go:74] duration metric: took 5.853076ms to wait for pod list to return data ...
	I1124 13:48:24.474037  608917 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:48:24.476681  608917 default_sa.go:45] found service account: "default"
	I1124 13:48:24.476712  608917 default_sa.go:55] duration metric: took 2.661232ms for default service account to be created ...
	I1124 13:48:24.476724  608917 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:48:24.479715  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.479757  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.479765  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.479776  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.479782  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.479788  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.479793  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.479798  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.479806  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.479831  608917 retry.go:31] will retry after 257.034103ms: missing components: kube-dns
	I1124 13:48:24.740811  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.740842  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.740848  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.740854  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.740858  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.740863  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.740866  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.740869  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.740876  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.740892  608917 retry.go:31] will retry after 244.335921ms: missing components: kube-dns
	I1124 13:48:24.989021  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:24.989054  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:24.989061  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:24.989067  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:24.989072  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:24.989077  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:24.989080  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:24.989084  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:24.989089  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:24.989104  608917 retry.go:31] will retry after 431.238044ms: missing components: kube-dns
	I1124 13:48:22.686011  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:22.686450  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:22.686506  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:22.686563  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:22.718842  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:22.718868  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:22.718874  572647 cri.go:89] found id: ""
	I1124 13:48:22.718885  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:22.719025  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.724051  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.728627  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:22.728697  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:22.758279  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:22.758305  572647 cri.go:89] found id: ""
	I1124 13:48:22.758315  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:22.758378  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.762905  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:22.763025  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:22.796176  572647 cri.go:89] found id: ""
	I1124 13:48:22.796207  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.796218  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:22.796227  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:22.796293  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:22.828770  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:22.828801  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:22.828815  572647 cri.go:89] found id: ""
	I1124 13:48:22.828827  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:22.828886  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.833530  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.837668  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:22.837750  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:22.867760  572647 cri.go:89] found id: ""
	I1124 13:48:22.867793  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.867806  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:22.867815  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:22.867976  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:22.899275  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:22.899305  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:22.899312  572647 cri.go:89] found id: ""
	I1124 13:48:22.899327  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:22.899391  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.903859  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:22.908121  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:22.908190  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:22.938883  572647 cri.go:89] found id: ""
	I1124 13:48:22.938961  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.938972  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:22.938980  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:22.939033  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:22.969840  572647 cri.go:89] found id: ""
	I1124 13:48:22.969864  572647 logs.go:282] 0 containers: []
	W1124 13:48:22.969872  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:22.969882  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:22.969903  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:23.031386  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:23.031411  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:23.031425  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:23.067770  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:23.067805  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:23.104851  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:23.104886  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:23.160621  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:23.160668  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:23.190994  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:23.191026  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:23.226509  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:23.226542  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:23.269082  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:23.269130  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:23.360572  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:23.360613  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:23.399049  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:23.399089  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:23.440241  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:23.440282  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:23.474172  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:23.474212  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:25.992569  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:25.993167  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:25.993241  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:25.993310  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:26.021789  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:26.021816  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:26.021823  572647 cri.go:89] found id: ""
	I1124 13:48:26.021834  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:26.021985  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.027084  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.031267  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:26.031350  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:26.063349  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:26.063379  572647 cri.go:89] found id: ""
	I1124 13:48:26.063390  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:26.063448  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.068064  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:26.068140  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:26.096106  572647 cri.go:89] found id: ""
	I1124 13:48:26.096148  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.096158  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:26.096165  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:26.096220  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:26.126156  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:26.126186  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:26.126193  572647 cri.go:89] found id: ""
	I1124 13:48:26.126205  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:26.126275  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.131369  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.135595  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:26.135657  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:26.163133  572647 cri.go:89] found id: ""
	I1124 13:48:26.163161  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.163169  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:26.163187  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:26.163244  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:26.192355  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:26.192378  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:26.192384  572647 cri.go:89] found id: ""
	I1124 13:48:26.192394  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:26.192549  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:26.197316  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:25.424597  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:25.424631  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:48:25.424636  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:25.424642  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:25.424646  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:25.424650  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:25.424653  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:25.424656  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:25.424663  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:48:25.424679  608917 retry.go:31] will retry after 458.014987ms: missing components: kube-dns
	I1124 13:48:25.886603  608917 system_pods.go:86] 8 kube-system pods found
	I1124 13:48:25.886633  608917 system_pods.go:89] "coredns-66bc5c9577-rcf8v" [a909252f-b923-46e8-acff-b0d0943c4a29] Running
	I1124 13:48:25.886641  608917 system_pods.go:89] "etcd-no-preload-608395" [b9426983-537c-4c4f-a8dd-3378b24f66f3] Running
	I1124 13:48:25.886644  608917 system_pods.go:89] "kindnet-zqlgn" [dc580d4e-c35b-4def-94d4-43697fee08ef] Running
	I1124 13:48:25.886649  608917 system_pods.go:89] "kube-apiserver-no-preload-608395" [00ece03a-94a4-4b04-8ee2-a6f539022a06] Running
	I1124 13:48:25.886653  608917 system_pods.go:89] "kube-controller-manager-no-preload-608395" [f4744606-354b-472e-a224-38df2dd201ca] Running
	I1124 13:48:25.886657  608917 system_pods.go:89] "kube-proxy-5vj5p" [2e67d44e-9eb4-4bb7-a087-a76def391cbb] Running
	I1124 13:48:25.886660  608917 system_pods.go:89] "kube-scheduler-no-preload-608395" [5bf4e205-28fb-4838-99bb-4fc91fe8642b] Running
	I1124 13:48:25.886663  608917 system_pods.go:89] "storage-provisioner" [c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa] Running
	I1124 13:48:25.886671  608917 system_pods.go:126] duration metric: took 1.409940532s to wait for k8s-apps to be running ...
	I1124 13:48:25.886680  608917 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:48:25.886726  608917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:48:25.901294  608917 system_svc.go:56] duration metric: took 14.604723ms WaitForService to wait for kubelet
	I1124 13:48:25.901324  608917 kubeadm.go:587] duration metric: took 16.26302303s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:48:25.901343  608917 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:48:25.904190  608917 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:48:25.904219  608917 node_conditions.go:123] node cpu capacity is 8
	I1124 13:48:25.904234  608917 node_conditions.go:105] duration metric: took 2.88688ms to run NodePressure ...
	I1124 13:48:25.904249  608917 start.go:242] waiting for startup goroutines ...
	I1124 13:48:25.904256  608917 start.go:247] waiting for cluster config update ...
	I1124 13:48:25.904266  608917 start.go:256] writing updated cluster config ...
	I1124 13:48:25.904560  608917 ssh_runner.go:195] Run: rm -f paused
	I1124 13:48:25.909215  608917 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:25.912986  608917 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rcf8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.917301  608917 pod_ready.go:94] pod "coredns-66bc5c9577-rcf8v" is "Ready"
	I1124 13:48:25.917324  608917 pod_ready.go:86] duration metric: took 4.297309ms for pod "coredns-66bc5c9577-rcf8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.919442  608917 pod_ready.go:83] waiting for pod "etcd-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.923976  608917 pod_ready.go:94] pod "etcd-no-preload-608395" is "Ready"
	I1124 13:48:25.923999  608917 pod_ready.go:86] duration metric: took 4.535115ms for pod "etcd-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.926003  608917 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.930385  608917 pod_ready.go:94] pod "kube-apiserver-no-preload-608395" is "Ready"
	I1124 13:48:25.930413  608917 pod_ready.go:86] duration metric: took 4.382406ms for pod "kube-apiserver-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:25.932261  608917 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.313581  608917 pod_ready.go:94] pod "kube-controller-manager-no-preload-608395" is "Ready"
	I1124 13:48:26.313615  608917 pod_ready.go:86] duration metric: took 381.333887ms for pod "kube-controller-manager-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.514064  608917 pod_ready.go:83] waiting for pod "kube-proxy-5vj5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:26.913664  608917 pod_ready.go:94] pod "kube-proxy-5vj5p" is "Ready"
	I1124 13:48:26.913702  608917 pod_ready.go:86] duration metric: took 399.60223ms for pod "kube-proxy-5vj5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.114488  608917 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.514056  608917 pod_ready.go:94] pod "kube-scheduler-no-preload-608395" is "Ready"
	I1124 13:48:27.514084  608917 pod_ready.go:86] duration metric: took 399.56934ms for pod "kube-scheduler-no-preload-608395" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:48:27.514098  608917 pod_ready.go:40] duration metric: took 1.604847792s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:48:27.561310  608917 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 13:48:27.563544  608917 out.go:179] * Done! kubectl is now configured to use "no-preload-608395" cluster and "default" namespace by default
	I1124 13:48:26.202352  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:26.202439  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:26.231899  572647 cri.go:89] found id: ""
	I1124 13:48:26.231953  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.231964  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:26.231973  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:26.232040  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:26.263417  572647 cri.go:89] found id: ""
	I1124 13:48:26.263446  572647 logs.go:282] 0 containers: []
	W1124 13:48:26.263459  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:26.263473  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:26.263488  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:26.354230  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:26.354265  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:26.389608  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:26.389652  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:26.427040  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:26.427077  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:26.466568  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:26.466603  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:26.503710  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:26.503749  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:26.539150  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:26.539193  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:26.583782  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:26.583825  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:26.617656  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:26.617696  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:26.634777  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:26.634809  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:26.693534  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:26.693559  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:26.693577  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:26.748627  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:26.748668  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.280171  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:29.280640  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:29.280694  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:29.280748  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:29.309613  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:29.309638  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:29.309644  572647 cri.go:89] found id: ""
	I1124 13:48:29.309660  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:29.309730  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.314623  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.319864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:29.319962  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:29.348671  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:29.348699  572647 cri.go:89] found id: ""
	I1124 13:48:29.348709  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:29.348775  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.353662  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:29.353728  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:29.383017  572647 cri.go:89] found id: ""
	I1124 13:48:29.383046  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.383058  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:29.383066  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:29.383121  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:29.411238  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:29.411259  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:29.411264  572647 cri.go:89] found id: ""
	I1124 13:48:29.411271  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:29.411325  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.415976  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.420189  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:29.420264  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:29.449856  572647 cri.go:89] found id: ""
	I1124 13:48:29.449890  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.449921  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:29.449929  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:29.450001  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:29.480136  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.480164  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:29.480171  572647 cri.go:89] found id: ""
	I1124 13:48:29.480181  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:29.480258  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.484998  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:29.489433  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:29.489504  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:29.519804  572647 cri.go:89] found id: ""
	I1124 13:48:29.519841  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.519854  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:29.519864  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:29.520048  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:29.549935  572647 cri.go:89] found id: ""
	I1124 13:48:29.549964  572647 logs.go:282] 0 containers: []
	W1124 13:48:29.549974  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:29.549986  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:29.549997  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:29.593521  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:29.593560  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:29.681751  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:29.681792  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:29.699198  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:29.699232  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:29.759823  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:29.759850  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:29.759863  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:29.798497  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:29.798534  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:29.835677  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:29.835718  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:29.864876  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:29.864923  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:29.898153  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:29.898186  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:29.932035  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:29.932073  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:29.971224  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:29.971258  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:30.026576  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:30.026619  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:32.561313  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:32.561791  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:32.561844  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:32.561894  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:32.598025  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:32.598050  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:32.598056  572647 cri.go:89] found id: ""
	I1124 13:48:32.598068  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:32.598133  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.602725  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.607141  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:32.607216  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:32.640836  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:32.640865  572647 cri.go:89] found id: ""
	I1124 13:48:32.640875  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:32.640954  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.646056  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:32.646126  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:32.674729  572647 cri.go:89] found id: ""
	I1124 13:48:32.674762  572647 logs.go:282] 0 containers: []
	W1124 13:48:32.674774  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:32.674782  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:32.674838  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:32.704017  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:32.704038  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:32.704042  572647 cri.go:89] found id: ""
	I1124 13:48:32.704051  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:32.704116  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.708425  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.712411  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:32.712479  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:32.740588  572647 cri.go:89] found id: ""
	I1124 13:48:32.740618  572647 logs.go:282] 0 containers: []
	W1124 13:48:32.740630  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:32.740638  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:32.740694  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:32.771592  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:32.771619  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:32.771624  572647 cri.go:89] found id: ""
	I1124 13:48:32.771632  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:32.771695  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.776594  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:32.781774  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:32.781857  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:32.821617  572647 cri.go:89] found id: ""
	I1124 13:48:32.821644  572647 logs.go:282] 0 containers: []
	W1124 13:48:32.821654  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:32.821662  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:32.821727  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:48:32.853528  572647 cri.go:89] found id: ""
	I1124 13:48:32.853552  572647 logs.go:282] 0 containers: []
	W1124 13:48:32.853560  572647 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:48:32.853571  572647 logs.go:123] Gathering logs for kube-scheduler [9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2] ...
	I1124 13:48:32.853587  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:32.894116  572647 logs.go:123] Gathering logs for kube-controller-manager [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604] ...
	I1124 13:48:32.894152  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:32.928183  572647 logs.go:123] Gathering logs for container status ...
	I1124 13:48:32.928225  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:48:32.963902  572647 logs.go:123] Gathering logs for kubelet ...
	I1124 13:48:32.963954  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:48:33.080028  572647 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:48:33.080059  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:48:33.151516  572647 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:48:33.151543  572647 logs.go:123] Gathering logs for kube-apiserver [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3] ...
	I1124 13:48:33.151560  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:33.190611  572647 logs.go:123] Gathering logs for kube-apiserver [707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce] ...
	I1124 13:48:33.190648  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:33.230177  572647 logs.go:123] Gathering logs for kube-controller-manager [89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b] ...
	I1124 13:48:33.230211  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:33.264707  572647 logs.go:123] Gathering logs for containerd ...
	I1124 13:48:33.264740  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1124 13:48:33.313312  572647 logs.go:123] Gathering logs for dmesg ...
	I1124 13:48:33.313352  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:48:33.332374  572647 logs.go:123] Gathering logs for etcd [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72] ...
	I1124 13:48:33.332404  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:33.374521  572647 logs.go:123] Gathering logs for kube-scheduler [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd] ...
	I1124 13:48:33.374570  572647 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:35.931383  572647 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:48:35.932010  572647 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:48:35.932066  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:48:35.932129  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:48:35.963379  572647 cri.go:89] found id: "6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3"
	I1124 13:48:35.963406  572647 cri.go:89] found id: "707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce"
	I1124 13:48:35.963411  572647 cri.go:89] found id: ""
	I1124 13:48:35.963421  572647 logs.go:282] 2 containers: [6700c126fd327c2e159d0faade33f59514f89b0a53de7e75c697f3b9b2c2f3b3 707b1dc8c22b4ecacc52d048e892a6c42437f5e5e64949cdb7dc0b6ffad3a6ce]
	I1124 13:48:35.963545  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:35.968069  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:35.972536  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1124 13:48:35.972616  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:48:36.003944  572647 cri.go:89] found id: "856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72"
	I1124 13:48:36.003968  572647 cri.go:89] found id: ""
	I1124 13:48:36.003977  572647 logs.go:282] 1 containers: [856aed50c704fa89134428f4365f3461d5d97f7b0f6e82094b1cba4928ec0c72]
	I1124 13:48:36.004038  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:36.009309  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1124 13:48:36.009386  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:48:36.041126  572647 cri.go:89] found id: ""
	I1124 13:48:36.041174  572647 logs.go:282] 0 containers: []
	W1124 13:48:36.041185  572647 logs.go:284] No container was found matching "coredns"
	I1124 13:48:36.041193  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:48:36.041318  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:48:36.072529  572647 cri.go:89] found id: "8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd"
	I1124 13:48:36.072546  572647 cri.go:89] found id: "9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2"
	I1124 13:48:36.072550  572647 cri.go:89] found id: ""
	I1124 13:48:36.072558  572647 logs.go:282] 2 containers: [8249c9dabc6b89efb0dd079b97d069b96667a655efa778d1d719e24d6ec100fd 9339d42ee555f7da1cb5ae94cf3bc22b2f3744f2ca5e2dfd459c4212a28774e2]
	I1124 13:48:36.072610  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:36.077016  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:36.081328  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:48:36.081405  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:48:36.113279  572647 cri.go:89] found id: ""
	I1124 13:48:36.113310  572647 logs.go:282] 0 containers: []
	W1124 13:48:36.113322  572647 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:48:36.113330  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:48:36.113390  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:48:36.146515  572647 cri.go:89] found id: "a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604"
	I1124 13:48:36.146542  572647 cri.go:89] found id: "89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b"
	I1124 13:48:36.146546  572647 cri.go:89] found id: ""
	I1124 13:48:36.146554  572647 logs.go:282] 2 containers: [a8454ffcf0213ebcba100cbad1da47ec4105f1be4ce6ed2911d3997ae6994604 89dffe66574edd9221074d8edcc51ee3d2cf2497cf9a3bd0e007560447aaa97b]
	I1124 13:48:36.146614  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:36.151049  572647 ssh_runner.go:195] Run: which crictl
	I1124 13:48:36.155578  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1124 13:48:36.155658  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:48:36.186139  572647 cri.go:89] found id: ""
	I1124 13:48:36.186164  572647 logs.go:282] 0 containers: []
	W1124 13:48:36.186175  572647 logs.go:284] No container was found matching "kindnet"
	I1124 13:48:36.186192  572647 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:48:36.186260  572647 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a87ce53f9a53a       56cc512116c8f       9 seconds ago       Running             busybox                   0                   abf634e42c234       busybox                                     default
	bf18342d6713e       52546a367cc9e       15 seconds ago      Running             coredns                   0                   6d8fde1010af0       coredns-66bc5c9577-rcf8v                    kube-system
	8507f470f3a86       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   38a319be0b79a       storage-provisioner                         kube-system
	2ea97fe407516       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   85152a5b82a56       kindnet-zqlgn                               kube-system
	9ddb50f35d3b7       fc25172553d79       30 seconds ago      Running             kube-proxy                0                   91198ed5eb4e3       kube-proxy-5vj5p                            kube-system
	f1e57ae5fc13d       7dd6aaa1717ab       40 seconds ago      Running             kube-scheduler            0                   85dfcbe134545       kube-scheduler-no-preload-608395            kube-system
	e0125ce665aa9       c80c8dbafe7dd       40 seconds ago      Running             kube-controller-manager   0                   f701193b00cde       kube-controller-manager-no-preload-608395   kube-system
	d82cad123b411       c3994bc696102       40 seconds ago      Running             kube-apiserver            0                   0000dcbeea4e5       kube-apiserver-no-preload-608395            kube-system
	dc4089699d63b       5f1f5298c888d       41 seconds ago      Running             etcd                      0                   b817a80ccfbeb       etcd-no-preload-608395                      kube-system
	
	
	==> containerd <==
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.510060828Z" level=info msg="CreateContainer within sandbox \"38a319be0b79ad5175957c7dc1e582e7edb89c9e37f58b06f9f0994f04874bc8\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"8507f470f3a86296f0a16f1905dfcfc9305722b3ab50ce2e78a13d8f8acddb22\""
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.510624936Z" level=info msg="StartContainer for \"8507f470f3a86296f0a16f1905dfcfc9305722b3ab50ce2e78a13d8f8acddb22\""
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.511676866Z" level=info msg="connecting to shim 8507f470f3a86296f0a16f1905dfcfc9305722b3ab50ce2e78a13d8f8acddb22" address="unix:///run/containerd/s/143ca10fd90c5cb4c30fdb00eed55a198510d11174be676001637e238c916be7" protocol=ttrpc version=3
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.517822696Z" level=info msg="Container bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.527617577Z" level=info msg="CreateContainer within sandbox \"6d8fde1010af0dbd838e4fd22a1362c81137d2db72e7d0d908443a54202b5c9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3\""
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.528169702Z" level=info msg="StartContainer for \"bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3\""
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.529084275Z" level=info msg="connecting to shim bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3" address="unix:///run/containerd/s/f028d04a185d6c9abe51092264b3e9e3162f4ccb61a33ad1b0cea00c1641b6e7" protocol=ttrpc version=3
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.577131132Z" level=info msg="StartContainer for \"8507f470f3a86296f0a16f1905dfcfc9305722b3ab50ce2e78a13d8f8acddb22\" returns successfully"
	Nov 24 13:48:24 no-preload-608395 containerd[663]: time="2025-11-24T13:48:24.580306824Z" level=info msg="StartContainer for \"bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3\" returns successfully"
	Nov 24 13:48:28 no-preload-608395 containerd[663]: time="2025-11-24T13:48:28.016567907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e09b20ec-b541-4478-9c67-c55b56ae8991,Namespace:default,Attempt:0,}"
	Nov 24 13:48:28 no-preload-608395 containerd[663]: time="2025-11-24T13:48:28.064047464Z" level=info msg="connecting to shim abf634e42c2348d9a3ac22d10e9756399d18ae0c0881e113e0b4034d8a76cb69" address="unix:///run/containerd/s/14d2a9716e984eb84752432f4df0d00c8f88a0426d6c135abeced2b7e10bbbaa" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 13:48:28 no-preload-608395 containerd[663]: time="2025-11-24T13:48:28.143114394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:e09b20ec-b541-4478-9c67-c55b56ae8991,Namespace:default,Attempt:0,} returns sandbox id \"abf634e42c2348d9a3ac22d10e9756399d18ae0c0881e113e0b4034d8a76cb69\""
	Nov 24 13:48:28 no-preload-608395 containerd[663]: time="2025-11-24T13:48:28.145079667Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.302216067Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.303276199Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.304933339Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.307230621Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.307725020Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.162597076s"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.307769131Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.312074703Z" level=info msg="CreateContainer within sandbox \"abf634e42c2348d9a3ac22d10e9756399d18ae0c0881e113e0b4034d8a76cb69\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.321536374Z" level=info msg="Container a87ce53f9a53a7e121b33fc1ab6bcf6a0671080a167fc5db54f42daa27b3b54e: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.328173630Z" level=info msg="CreateContainer within sandbox \"abf634e42c2348d9a3ac22d10e9756399d18ae0c0881e113e0b4034d8a76cb69\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"a87ce53f9a53a7e121b33fc1ab6bcf6a0671080a167fc5db54f42daa27b3b54e\""
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.329029778Z" level=info msg="StartContainer for \"a87ce53f9a53a7e121b33fc1ab6bcf6a0671080a167fc5db54f42daa27b3b54e\""
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.329901108Z" level=info msg="connecting to shim a87ce53f9a53a7e121b33fc1ab6bcf6a0671080a167fc5db54f42daa27b3b54e" address="unix:///run/containerd/s/14d2a9716e984eb84752432f4df0d00c8f88a0426d6c135abeced2b7e10bbbaa" protocol=ttrpc version=3
	Nov 24 13:48:30 no-preload-608395 containerd[663]: time="2025-11-24T13:48:30.393866048Z" level=info msg="StartContainer for \"a87ce53f9a53a7e121b33fc1ab6bcf6a0671080a167fc5db54f42daa27b3b54e\" returns successfully"
	
	
	==> coredns [bf18342d6713eac5d830a361ceb568e559a479f96c8273418cde044492ec70a3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33889 - 60274 "HINFO IN 308682473451809031.9053382920724870437. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.018878381s
	
	
	==> describe nodes <==
	Name:               no-preload-608395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-608395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=no-preload-608395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_48_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:48:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-608395
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:48:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:48:35 +0000   Mon, 24 Nov 2025 13:48:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:48:35 +0000   Mon, 24 Nov 2025 13:48:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:48:35 +0000   Mon, 24 Nov 2025 13:48:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:48:35 +0000   Mon, 24 Nov 2025 13:48:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-608395
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                320731f7-0f66-4c7b-bb73-4a2704cad18d
	  Boot ID:                    715d4626-373f-499b-b5de-b6d832ce4fe4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-rcf8v                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-no-preload-608395                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-zqlgn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-608395             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-no-preload-608395    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-5vj5p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-608395             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 36s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  36s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  36s   kubelet          Node no-preload-608395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s   kubelet          Node no-preload-608395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s   kubelet          Node no-preload-608395 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s   node-controller  Node no-preload-608395 event: Registered Node no-preload-608395 in Controller
	  Normal  NodeReady                16s   kubelet          Node no-preload-608395 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[Nov24 12:45] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a fb 84 7d 9e 9e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[ +25.292047] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[  +0.024207] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 8e 71 0b 76 c3 08 06
	[ +16.768103] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	[  +5.950770] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[Nov24 12:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 8b d0 4a da 7e 08 06
	[  +0.000557] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[  +1.903671] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 1f e8 fc 59 74 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[ +17.535584] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 31 ec 7c 1d 38 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	
	
	==> etcd [dc4089699d63b1ebefa2ca4daebfcf11cd7227a50a1e6e1b2289c4b80616887b] <==
	{"level":"warn","ts":"2025-11-24T13:48:00.880768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.888650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.897590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.907577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.914266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.921173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.934065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.940316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.953688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.960197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.967051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.974729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:00.988889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:01.013686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:01.021343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:01.028137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:48:01.079446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37244","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:48:02.624508Z","caller":"traceutil/trace.go:172","msg":"trace[1781641608] linearizableReadLoop","detail":"{readStateIndex:72; appliedIndex:72; }","duration":"110.051169ms","start":"2025-11-24T13:48:02.514411Z","end":"2025-11-24T13:48:02.624462Z","steps":["trace[1781641608] 'read index received'  (duration: 110.044712ms)","trace[1781641608] 'applied index is now lower than readState.Index'  (duration: 5.647µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:48:02.673029Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.028533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T13:48:02.673105Z","caller":"traceutil/trace.go:172","msg":"trace[707023611] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:69; }","duration":"137.102091ms","start":"2025-11-24T13:48:02.535985Z","end":"2025-11-24T13:48:02.673087Z","steps":["trace[707023611] 'agreement among raft nodes before linearized reading'  (duration: 137.004371ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:48:02.673142Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.494312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T13:48:02.673176Z","caller":"traceutil/trace.go:172","msg":"trace[152867856] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:69; }","duration":"129.535239ms","start":"2025-11-24T13:48:02.543628Z","end":"2025-11-24T13:48:02.673163Z","steps":["trace[152867856] 'agreement among raft nodes before linearized reading'  (duration: 129.454766ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:48:02.672887Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.459223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T13:48:02.673288Z","caller":"traceutil/trace.go:172","msg":"trace[1228496030] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:0; response_revision:68; }","duration":"158.883515ms","start":"2025-11-24T13:48:02.514391Z","end":"2025-11-24T13:48:02.673274Z","steps":["trace[1228496030] 'agreement among raft nodes before linearized reading'  (duration: 110.197209ms)","trace[1228496030] 'range keys from in-memory index tree'  (duration: 48.211849ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:48:02.673056Z","caller":"traceutil/trace.go:172","msg":"trace[717417019] transaction","detail":"{read_only:false; response_revision:69; number_of_response:1; }","duration":"159.871089ms","start":"2025-11-24T13:48:02.513138Z","end":"2025-11-24T13:48:02.673009Z","steps":["trace[717417019] 'process raft request'  (duration: 111.381018ms)","trace[717417019] 'compare'  (duration: 48.311068ms)"],"step_count":2}
	
	
	==> kernel <==
	 13:48:40 up  2:30,  0 user,  load average: 1.95, 2.77, 1.91
	Linux no-preload-608395 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ea97fe407516fa684fa4c2e7ad02af95ea220afac279014e4b4e3fe4dff2140] <==
	I1124 13:48:13.811405       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:48:13.811705       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 13:48:13.811879       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:48:13.811899       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:48:13.811974       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:48:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:48:14.016296       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:48:14.108095       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:48:14.207539       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:48:14.207904       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:48:14.608274       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:48:14.608309       1 metrics.go:72] Registering metrics
	I1124 13:48:14.608385       1 controller.go:711] "Syncing nftables rules"
	I1124 13:48:24.023180       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:48:24.023253       1 main.go:301] handling current node
	I1124 13:48:34.017224       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:48:34.017265       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d82cad123b4115bcd48ca1660a95b3679527efeba0bced6899fbfd61163285fe] <==
	I1124 13:48:01.533803       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:48:01.534806       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 13:48:01.539654       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:48:01.548173       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:48:01.548340       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:48:01.561341       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:48:01.562153       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:48:02.489855       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:48:02.674429       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:48:02.674534       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:48:03.220273       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:48:03.262189       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:48:03.341712       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:48:03.348882       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 13:48:03.350044       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:48:03.354460       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:48:03.475714       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:48:04.567992       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:48:04.589259       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:48:04.601283       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:48:09.228819       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:48:09.278836       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:48:09.430563       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:48:09.435571       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 13:48:36.837064       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:55596: use of closed network connection
	
	
	==> kube-controller-manager [e0125ce665aa93a74314d6f23ea2fab5491134c5aacd08baba2eb4d66c850e3c] <==
	I1124 13:48:08.442655       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:48:08.449990       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 13:48:08.457410       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 13:48:08.473049       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 13:48:08.473106       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:48:08.473123       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 13:48:08.473131       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 13:48:08.473696       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:48:08.474102       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 13:48:08.474184       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 13:48:08.474294       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-608395"
	I1124 13:48:08.474342       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 13:48:08.474589       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 13:48:08.475058       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:48:08.475159       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:48:08.475215       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 13:48:08.475226       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 13:48:08.475443       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 13:48:08.475540       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:48:08.475941       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 13:48:08.475969       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:48:08.475996       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:48:08.481156       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:48:08.504046       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:48:28.478220       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9ddb50f35d3b70a8df49aa4b5877775ec4126034cc94e6932e87b579184a5c1e] <==
	I1124 13:48:10.412732       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:48:10.487102       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:48:10.588152       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:48:10.588196       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 13:48:10.588320       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:48:10.611310       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:48:10.611377       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:48:10.617651       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:48:10.618063       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:48:10.618091       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:48:10.619529       1 config.go:200] "Starting service config controller"
	I1124 13:48:10.619571       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:48:10.619634       1 config.go:309] "Starting node config controller"
	I1124 13:48:10.619944       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:48:10.620046       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:48:10.620078       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:48:10.619618       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:48:10.620120       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:48:10.719772       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:48:10.720304       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:48:10.720333       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:48:10.720355       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f1e57ae5fc13de600be37e1d97249746f65ecb876d4354e85073ed623a64ef5c] <==
	E1124 13:48:01.491129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:48:01.491188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:48:01.491203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:48:01.491258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:48:01.491260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:48:01.491367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:48:02.309331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:48:02.355280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:48:02.452183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:48:02.607841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:48:02.628272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:48:02.679178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:48:02.679824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:48:02.713000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:48:02.745011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:48:02.807930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:48:02.855374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:48:02.901084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:48:02.908158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:48:02.953400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:48:02.976892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:48:03.018088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:48:03.027582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:48:03.033893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1124 13:48:04.884430       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315253    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55zsb\" (UniqueName: \"kubernetes.io/projected/2e67d44e-9eb4-4bb7-a087-a76def391cbb-kube-api-access-55zsb\") pod \"kube-proxy-5vj5p\" (UID: \"2e67d44e-9eb4-4bb7-a087-a76def391cbb\") " pod="kube-system/kube-proxy-5vj5p"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315312    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc580d4e-c35b-4def-94d4-43697fee08ef-xtables-lock\") pod \"kindnet-zqlgn\" (UID: \"dc580d4e-c35b-4def-94d4-43697fee08ef\") " pod="kube-system/kindnet-zqlgn"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315333    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc580d4e-c35b-4def-94d4-43697fee08ef-lib-modules\") pod \"kindnet-zqlgn\" (UID: \"dc580d4e-c35b-4def-94d4-43697fee08ef\") " pod="kube-system/kindnet-zqlgn"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315358    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcz6\" (UniqueName: \"kubernetes.io/projected/dc580d4e-c35b-4def-94d4-43697fee08ef-kube-api-access-jfcz6\") pod \"kindnet-zqlgn\" (UID: \"dc580d4e-c35b-4def-94d4-43697fee08ef\") " pod="kube-system/kindnet-zqlgn"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315383    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dc580d4e-c35b-4def-94d4-43697fee08ef-cni-cfg\") pod \"kindnet-zqlgn\" (UID: \"dc580d4e-c35b-4def-94d4-43697fee08ef\") " pod="kube-system/kindnet-zqlgn"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315404    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e67d44e-9eb4-4bb7-a087-a76def391cbb-lib-modules\") pod \"kube-proxy-5vj5p\" (UID: \"2e67d44e-9eb4-4bb7-a087-a76def391cbb\") " pod="kube-system/kube-proxy-5vj5p"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315461    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e67d44e-9eb4-4bb7-a087-a76def391cbb-kube-proxy\") pod \"kube-proxy-5vj5p\" (UID: \"2e67d44e-9eb4-4bb7-a087-a76def391cbb\") " pod="kube-system/kube-proxy-5vj5p"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: I1124 13:48:09.315515    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e67d44e-9eb4-4bb7-a087-a76def391cbb-xtables-lock\") pod \"kube-proxy-5vj5p\" (UID: \"2e67d44e-9eb4-4bb7-a087-a76def391cbb\") " pod="kube-system/kube-proxy-5vj5p"
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423403    2128 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423447    2128 projected.go:196] Error preparing data for projected volume kube-api-access-jfcz6 for pod kube-system/kindnet-zqlgn: configmap "kube-root-ca.crt" not found
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423403    2128 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423530    2128 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dc580d4e-c35b-4def-94d4-43697fee08ef-kube-api-access-jfcz6 podName:dc580d4e-c35b-4def-94d4-43697fee08ef nodeName:}" failed. No retries permitted until 2025-11-24 13:48:09.923496635 +0000 UTC m=+5.608589954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jfcz6" (UniqueName: "kubernetes.io/projected/dc580d4e-c35b-4def-94d4-43697fee08ef-kube-api-access-jfcz6") pod "kindnet-zqlgn" (UID: "dc580d4e-c35b-4def-94d4-43697fee08ef") : configmap "kube-root-ca.crt" not found
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423539    2128 projected.go:196] Error preparing data for projected volume kube-api-access-55zsb for pod kube-system/kube-proxy-5vj5p: configmap "kube-root-ca.crt" not found
	Nov 24 13:48:09 no-preload-608395 kubelet[2128]: E1124 13:48:09.423599    2128 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e67d44e-9eb4-4bb7-a087-a76def391cbb-kube-api-access-55zsb podName:2e67d44e-9eb4-4bb7-a087-a76def391cbb nodeName:}" failed. No retries permitted until 2025-11-24 13:48:09.923579676 +0000 UTC m=+5.608672986 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-55zsb" (UniqueName: "kubernetes.io/projected/2e67d44e-9eb4-4bb7-a087-a76def391cbb-kube-api-access-55zsb") pod "kube-proxy-5vj5p" (UID: "2e67d44e-9eb4-4bb7-a087-a76def391cbb") : configmap "kube-root-ca.crt" not found
	Nov 24 13:48:10 no-preload-608395 kubelet[2128]: I1124 13:48:10.458684    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5vj5p" podStartSLOduration=1.458660564 podStartE2EDuration="1.458660564s" podCreationTimestamp="2025-11-24 13:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:10.45866162 +0000 UTC m=+6.143754938" watchObservedRunningTime="2025-11-24 13:48:10.458660564 +0000 UTC m=+6.143753882"
	Nov 24 13:48:14 no-preload-608395 kubelet[2128]: I1124 13:48:14.470969    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zqlgn" podStartSLOduration=2.500270355 podStartE2EDuration="5.470902852s" podCreationTimestamp="2025-11-24 13:48:09 +0000 UTC" firstStartedPulling="2025-11-24 13:48:10.528340574 +0000 UTC m=+6.213433877" lastFinishedPulling="2025-11-24 13:48:13.498973073 +0000 UTC m=+9.184066374" observedRunningTime="2025-11-24 13:48:14.4593351 +0000 UTC m=+10.144428418" watchObservedRunningTime="2025-11-24 13:48:14.470902852 +0000 UTC m=+10.155996169"
	Nov 24 13:48:24 no-preload-608395 kubelet[2128]: I1124 13:48:24.041807    2128 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 13:48:24 no-preload-608395 kubelet[2128]: I1124 13:48:24.107895    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb7xn\" (UniqueName: \"kubernetes.io/projected/c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa-kube-api-access-rb7xn\") pod \"storage-provisioner\" (UID: \"c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa\") " pod="kube-system/storage-provisioner"
	Nov 24 13:48:24 no-preload-608395 kubelet[2128]: I1124 13:48:24.107983    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a909252f-b923-46e8-acff-b0d0943c4a29-config-volume\") pod \"coredns-66bc5c9577-rcf8v\" (UID: \"a909252f-b923-46e8-acff-b0d0943c4a29\") " pod="kube-system/coredns-66bc5c9577-rcf8v"
	Nov 24 13:48:24 no-preload-608395 kubelet[2128]: I1124 13:48:24.108001    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqnm6\" (UniqueName: \"kubernetes.io/projected/a909252f-b923-46e8-acff-b0d0943c4a29-kube-api-access-qqnm6\") pod \"coredns-66bc5c9577-rcf8v\" (UID: \"a909252f-b923-46e8-acff-b0d0943c4a29\") " pod="kube-system/coredns-66bc5c9577-rcf8v"
	Nov 24 13:48:24 no-preload-608395 kubelet[2128]: I1124 13:48:24.108026    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa-tmp\") pod \"storage-provisioner\" (UID: \"c3c5ce52-cc27-4ccb-8bfb-e8f60c0c8faa\") " pod="kube-system/storage-provisioner"
	Nov 24 13:48:25 no-preload-608395 kubelet[2128]: I1124 13:48:25.487014    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rcf8v" podStartSLOduration=16.48687978 podStartE2EDuration="16.48687978s" podCreationTimestamp="2025-11-24 13:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:25.486827527 +0000 UTC m=+21.171920848" watchObservedRunningTime="2025-11-24 13:48:25.48687978 +0000 UTC m=+21.171973101"
	Nov 24 13:48:27 no-preload-608395 kubelet[2128]: I1124 13:48:27.701742    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.701716111 podStartE2EDuration="17.701716111s" podCreationTimestamp="2025-11-24 13:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:48:25.512975581 +0000 UTC m=+21.198068913" watchObservedRunningTime="2025-11-24 13:48:27.701716111 +0000 UTC m=+23.386809429"
	Nov 24 13:48:27 no-preload-608395 kubelet[2128]: I1124 13:48:27.731241    2128 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8xzb\" (UniqueName: \"kubernetes.io/projected/e09b20ec-b541-4478-9c67-c55b56ae8991-kube-api-access-p8xzb\") pod \"busybox\" (UID: \"e09b20ec-b541-4478-9c67-c55b56ae8991\") " pod="default/busybox"
	Nov 24 13:48:30 no-preload-608395 kubelet[2128]: I1124 13:48:30.499489    2128 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.335491178 podStartE2EDuration="3.499466503s" podCreationTimestamp="2025-11-24 13:48:27 +0000 UTC" firstStartedPulling="2025-11-24 13:48:28.144692632 +0000 UTC m=+23.829785929" lastFinishedPulling="2025-11-24 13:48:30.308667942 +0000 UTC m=+25.993761254" observedRunningTime="2025-11-24 13:48:30.49935399 +0000 UTC m=+26.184447308" watchObservedRunningTime="2025-11-24 13:48:30.499466503 +0000 UTC m=+26.184559821"
	
	
	==> storage-provisioner [8507f470f3a86296f0a16f1905dfcfc9305722b3ab50ce2e78a13d8f8acddb22] <==
	I1124 13:48:24.597855       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 13:48:24.600788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:24.606113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:48:24.606397       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:48:24.606646       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-608395_58da3de6-110c-42ba-ae46-08bea4778988!
	I1124 13:48:24.606790       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1725d06e-f0b5-414f-b855-627c3860c519", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-608395_58da3de6-110c-42ba-ae46-08bea4778988 became leader
	W1124 13:48:24.608881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:24.613215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:48:24.706978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-608395_58da3de6-110c-42ba-ae46-08bea4778988!
	W1124 13:48:26.617331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:26.623469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:28.627192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:28.631977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:30.635249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:30.640668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:32.643448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:32.647985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:34.651906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:34.657673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:36.661156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:36.666197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:38.670153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:38.676270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:40.679613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:48:40.684287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-608395 -n no-preload-608395
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-608395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (13.86s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (14.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-971503 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc] Pending
helpers_test.go:352: "busybox" [3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.006081746s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-971503 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
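For reference, the check that fails above can be re-run by hand while the profile is still up; this is a minimal sketch, assuming the embed-certs-971503 context and the busybox pod from testdata/busybox.yaml are still present:

    kubectl --context embed-certs-971503 exec busybox -- /bin/sh -c "ulimit -n"   # the test expects 1048576; this run returned 1024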
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-971503
helpers_test.go:243: (dbg) docker inspect embed-certs-971503:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2",
	        "Created": "2025-11-24T13:49:58.810032472Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 637504,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:49:58.858286336Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2/hosts",
	        "LogPath": "/var/lib/docker/containers/1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2/1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2-json.log",
	        "Name": "/embed-certs-971503",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-971503:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-971503",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2",
	                "LowerDir": "/var/lib/docker/overlay2/3fccab807900f71d48edb071c3cc12aa6ab08c6868d12372bae8553c81a35f4a-init/diff:/var/lib/docker/overlay2/0f013e03fd0eaee4efc608fb0376e7d3e8ba628388f5191310c2259ab273ad26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3fccab807900f71d48edb071c3cc12aa6ab08c6868d12372bae8553c81a35f4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3fccab807900f71d48edb071c3cc12aa6ab08c6868d12372bae8553c81a35f4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3fccab807900f71d48edb071c3cc12aa6ab08c6868d12372bae8553c81a35f4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-971503",
	                "Source": "/var/lib/docker/volumes/embed-certs-971503/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-971503",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-971503",
	                "name.minikube.sigs.k8s.io": "embed-certs-971503",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "46c1be6c4a896de5cb42b792366beae66ff9d81d76d5061910d35fe4f58e9211",
	            "SandboxKey": "/var/run/docker/netns/46c1be6c4a89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-971503": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "126edd368031c93a1e03fe1e5d53e7ff92fac3cd9bbf73b49a1b9d47979d9142",
	                    "EndpointID": "7dc19034c85b8f8b0057144eea75257b752635826d4c117a986ef3d3445b1853",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "3e:c6:3c:93:78:9a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-971503",
	                        "1974ba44039b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-971503 -n embed-certs-971503
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-971503 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-971503 logs -n 25: (1.565980483s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p no-preload-608395 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:48 UTC │ 24 Nov 25 13:49 UTC │
	│ image   │ old-k8s-version-513442 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-513442       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ pause   │ -p old-k8s-version-513442 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-513442       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ unpause │ -p old-k8s-version-513442 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-513442       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ delete  │ -p old-k8s-version-513442                                                                                                                                                                                                                           │ old-k8s-version-513442       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ delete  │ -p old-k8s-version-513442                                                                                                                                                                                                                           │ old-k8s-version-513442       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ start   │ -p embed-certs-971503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-971503           │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p cert-expiration-099863 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-099863       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ image   │ no-preload-608395 image list --format=json                                                                                                                                                                                                          │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ pause   │ -p no-preload-608395 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ unpause │ -p no-preload-608395 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ delete  │ -p no-preload-608395                                                                                                                                                                                                                                │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p cert-expiration-099863                                                                                                                                                                                                                           │ cert-expiration-099863       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p no-preload-608395                                                                                                                                                                                                                                │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p disable-driver-mounts-312087                                                                                                                                                                                                                     │ disable-driver-mounts-312087 │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p default-k8s-diff-port-403602 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-403602 │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ start   │ -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ start   │ -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p kubernetes-upgrade-358357                                                                                                                                                                                                                        │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p auto-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-355661                  │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-846862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ stop    │ -p newest-cni-846862 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-846862 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:50:42
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:50:42.121825  651882 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:50:42.122161  651882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:50:42.122174  651882 out.go:374] Setting ErrFile to fd 2...
	I1124 13:50:42.122181  651882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:50:42.122400  651882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:50:42.123105  651882 out.go:368] Setting JSON to false
	I1124 13:50:42.124490  651882 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9181,"bootTime":1763983061,"procs":373,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:50:42.124564  651882 start.go:143] virtualization: kvm guest
	I1124 13:50:42.126953  651882 out.go:179] * [newest-cni-846862] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:50:42.128929  651882 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:50:42.128955  651882 notify.go:221] Checking for updates...
	I1124 13:50:42.132310  651882 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:50:42.133947  651882 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:50:42.135540  651882 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:50:42.137148  651882 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:50:42.138632  651882 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:50:42.140607  651882 config.go:182] Loaded profile config "newest-cni-846862": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:42.141361  651882 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:50:42.166706  651882 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:50:42.166821  651882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:50:42.227448  651882 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 13:50:42.216063705 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:50:42.227551  651882 docker.go:319] overlay module found
	I1124 13:50:42.229629  651882 out.go:179] * Using the docker driver based on existing profile
	I1124 13:50:42.231073  651882 start.go:309] selected driver: docker
	I1124 13:50:42.231094  651882 start.go:927] validating driver "docker" against &{Name:newest-cni-846862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-846862 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:50:42.231208  651882 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:50:42.231895  651882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:50:42.297061  651882 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 13:50:42.287060392 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:50:42.297368  651882 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 13:50:42.297403  651882 cni.go:84] Creating CNI manager for ""
	I1124 13:50:42.297465  651882 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:50:42.297504  651882 start.go:353] cluster config:
	{Name:newest-cni-846862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-846862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:50:42.299821  651882 out.go:179] * Starting "newest-cni-846862" primary control-plane node in "newest-cni-846862" cluster
	I1124 13:50:42.301417  651882 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:50:42.303070  651882 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:50:42.304544  651882 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:50:42.304588  651882 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 13:50:42.304613  651882 cache.go:65] Caching tarball of preloaded images
	I1124 13:50:42.304644  651882 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:50:42.304785  651882 preload.go:238] Found /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 13:50:42.304838  651882 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 13:50:42.305081  651882 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/config.json ...
	I1124 13:50:42.326535  651882 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:50:42.326557  651882 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:50:42.326575  651882 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:50:42.326616  651882 start.go:360] acquireMachinesLock for newest-cni-846862: {Name:mkc4689539223e2faafe505852e0d71ad6dc6db7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:50:42.326700  651882 start.go:364] duration metric: took 49.739µs to acquireMachinesLock for "newest-cni-846862"
	I1124 13:50:42.326725  651882 start.go:96] Skipping create...Using existing machine configuration
	I1124 13:50:42.326734  651882 fix.go:54] fixHost starting: 
	I1124 13:50:42.327102  651882 cli_runner.go:164] Run: docker container inspect newest-cni-846862 --format={{.State.Status}}
	I1124 13:50:42.345035  651882 fix.go:112] recreateIfNeeded on newest-cni-846862: state=Stopped err=<nil>
	W1124 13:50:42.345075  651882 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 13:50:40.355827  648989 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-355661:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.222668556s)
	I1124 13:50:40.355867  648989 kic.go:203] duration metric: took 5.222872112s to extract preloaded images to volume ...
	W1124 13:50:40.355996  648989 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:50:40.356041  648989 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:50:40.356095  648989 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:50:40.431653  648989 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-355661 --name auto-355661 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-355661 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-355661 --network auto-355661 --ip 192.168.76.2 --volume auto-355661:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:50:40.831144  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Running}}
	I1124 13:50:40.853554  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Status}}
	I1124 13:50:40.875315  648989 cli_runner.go:164] Run: docker exec auto-355661 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:50:40.932447  648989 oci.go:144] the created container "auto-355661" has a running status.
	I1124 13:50:40.932506  648989 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa...
	I1124 13:50:40.954660  648989 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:50:40.983051  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Status}}
	I1124 13:50:41.009900  648989 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:50:41.009938  648989 kic_runner.go:114] Args: [docker exec --privileged auto-355661 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:50:41.064404  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Status}}
	I1124 13:50:41.087863  648989 machine.go:94] provisionDockerMachine start ...
	I1124 13:50:41.088063  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:41.110771  648989 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:41.111075  648989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 13:50:41.111098  648989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:50:41.111845  648989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40402->127.0.0.1:33471: read: connection reset by peer
	I1124 13:50:44.261525  648989 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-355661
	
	I1124 13:50:44.261562  648989 ubuntu.go:182] provisioning hostname "auto-355661"
	I1124 13:50:44.261637  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:44.280595  648989 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:44.280951  648989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 13:50:44.280972  648989 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-355661 && echo "auto-355661" | sudo tee /etc/hostname
	W1124 13:50:43.506495  639073 node_ready.go:57] node "default-k8s-diff-port-403602" has "Ready":"False" status (will retry)
	W1124 13:50:45.506950  639073 node_ready.go:57] node "default-k8s-diff-port-403602" has "Ready":"False" status (will retry)
	I1124 13:50:44.440279  648989 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-355661
	
	I1124 13:50:44.440363  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:44.460139  648989 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:44.460390  648989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 13:50:44.460408  648989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-355661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-355661/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-355661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:50:44.609344  648989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:50:44.609380  648989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:50:44.609407  648989 ubuntu.go:190] setting up certificates
	I1124 13:50:44.609436  648989 provision.go:84] configureAuth start
	I1124 13:50:44.609504  648989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-355661
	I1124 13:50:44.628605  648989 provision.go:143] copyHostCerts
	I1124 13:50:44.628692  648989 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:50:44.628709  648989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:50:44.628803  648989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:50:44.628964  648989 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:50:44.628978  648989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:50:44.629039  648989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:50:44.629145  648989 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:50:44.629157  648989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:50:44.629214  648989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:50:44.629311  648989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.auto-355661 san=[127.0.0.1 192.168.76.2 auto-355661 localhost minikube]
	I1124 13:50:44.783833  648989 provision.go:177] copyRemoteCerts
	I1124 13:50:44.783921  648989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:50:44.783977  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:44.802659  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:50:44.909986  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:50:44.933221  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 13:50:44.954758  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:50:44.974785  648989 provision.go:87] duration metric: took 365.326039ms to configureAuth
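The configureAuth phase above ends with a server certificate whose SANs are listed on the provision.go:117 line (127.0.0.1, 192.168.76.2, auto-355661, localhost, minikube). As a hedged aside, not part of the test run, the SANs could be spot-checked on the Jenkins host with openssl, using the .minikube paths exactly as logged:

	# Print the Subject Alternative Name extension of the freshly generated server cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'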
	I1124 13:50:44.974819  648989 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:50:44.975019  648989 config.go:182] Loaded profile config "auto-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:44.975036  648989 machine.go:97] duration metric: took 3.887149716s to provisionDockerMachine
	I1124 13:50:44.975045  648989 client.go:176] duration metric: took 10.421636389s to LocalClient.Create
	I1124 13:50:44.975067  648989 start.go:167] duration metric: took 10.421716261s to libmachine.API.Create "auto-355661"
	I1124 13:50:44.975080  648989 start.go:293] postStartSetup for "auto-355661" (driver="docker")
	I1124 13:50:44.975095  648989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:50:44.975156  648989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:50:44.975207  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:44.993898  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:50:45.100724  648989 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:50:45.105028  648989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:50:45.105060  648989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:50:45.105073  648989 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:50:45.105136  648989 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:50:45.105251  648989 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:50:45.105394  648989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:50:45.114344  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:50:45.137975  648989 start.go:296] duration metric: took 162.873069ms for postStartSetup
	I1124 13:50:45.138393  648989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-355661
	I1124 13:50:45.158491  648989 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/config.json ...
	I1124 13:50:45.158764  648989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:50:45.158827  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:45.178143  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:50:45.279727  648989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:50:45.284888  648989 start.go:128] duration metric: took 10.734784311s to createHost
	I1124 13:50:45.284931  648989 start.go:83] releasing machines lock for "auto-355661", held for 10.73507567s
	I1124 13:50:45.285021  648989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-355661
	I1124 13:50:45.303787  648989 ssh_runner.go:195] Run: cat /version.json
	I1124 13:50:45.303838  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:45.303889  648989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:50:45.304034  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:45.323852  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:50:45.324019  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:50:45.425014  648989 ssh_runner.go:195] Run: systemctl --version
	I1124 13:50:45.485705  648989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:50:45.491273  648989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:50:45.491339  648989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:50:45.521276  648989 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:50:45.521303  648989 start.go:496] detecting cgroup driver to use...
	I1124 13:50:45.521335  648989 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:50:45.521382  648989 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:50:45.537220  648989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:50:45.553404  648989 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:50:45.553465  648989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:50:45.571827  648989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:50:45.591089  648989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:50:45.680282  648989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:50:45.777471  648989 docker.go:234] disabling docker service ...
	I1124 13:50:45.777536  648989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:50:45.799305  648989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:50:45.814296  648989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:50:45.910408  648989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:50:46.009028  648989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:50:46.023793  648989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:50:46.040070  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 13:50:46.052732  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:50:46.063321  648989 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:50:46.063398  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:50:46.074310  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:50:46.084449  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:50:46.094858  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:50:46.105290  648989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:50:46.115329  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:50:46.125424  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:50:46.135630  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:50:46.146962  648989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:50:46.156057  648989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:50:46.165638  648989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:50:46.251435  648989 ssh_runner.go:195] Run: sudo systemctl restart containerd
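The sed series above rewrites /etc/containerd/config.toml in place (SystemdCgroup, sandbox_image, conf_dir, enable_unprivileged_ports) and then restarts containerd. A minimal sketch, assuming the same node shell and the crictl binary at /usr/local/bin shown later in this log, for confirming the edits landed after the restart:

	# Spot-check the rewritten keys and ask the restarted runtime for its view of them.
	sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	sudo /usr/local/bin/crictl version
	sudo /usr/local/bin/crictl info | grep -i cgroup   # cgroup driver as reported via the CRI, if exposed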
	I1124 13:50:46.360319  648989 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:50:46.360396  648989 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:50:46.365012  648989 start.go:564] Will wait 60s for crictl version
	I1124 13:50:46.365082  648989 ssh_runner.go:195] Run: which crictl
	I1124 13:50:46.369353  648989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:50:46.399383  648989 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:50:46.399448  648989 ssh_runner.go:195] Run: containerd --version
	I1124 13:50:46.422950  648989 ssh_runner.go:195] Run: containerd --version
	I1124 13:50:46.449391  648989 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 13:50:42.347095  651882 out.go:252] * Restarting existing docker container for "newest-cni-846862" ...
	I1124 13:50:42.347196  651882 cli_runner.go:164] Run: docker start newest-cni-846862
	I1124 13:50:42.679126  651882 cli_runner.go:164] Run: docker container inspect newest-cni-846862 --format={{.State.Status}}
	I1124 13:50:42.699074  651882 kic.go:430] container "newest-cni-846862" state is running.
	I1124 13:50:42.699651  651882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-846862
	I1124 13:50:42.718209  651882 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/config.json ...
	I1124 13:50:42.718521  651882 machine.go:94] provisionDockerMachine start ...
	I1124 13:50:42.718634  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:42.737521  651882 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:42.737828  651882 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I1124 13:50:42.737840  651882 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:50:42.738494  651882 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48808->127.0.0.1:33476: read: connection reset by peer
	I1124 13:50:45.890573  651882 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-846862
	
	I1124 13:50:45.890609  651882 ubuntu.go:182] provisioning hostname "newest-cni-846862"
	I1124 13:50:45.890679  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:45.910179  651882 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:45.910490  651882 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I1124 13:50:45.910511  651882 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-846862 && echo "newest-cni-846862" | sudo tee /etc/hostname
	I1124 13:50:46.073585  651882 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-846862
	
	I1124 13:50:46.073669  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.093717  651882 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:46.094049  651882 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I1124 13:50:46.094072  651882 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-846862' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-846862/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-846862' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:50:46.248634  651882 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:50:46.248666  651882 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:50:46.248728  651882 ubuntu.go:190] setting up certificates
	I1124 13:50:46.248768  651882 provision.go:84] configureAuth start
	I1124 13:50:46.248849  651882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-846862
	I1124 13:50:46.270511  651882 provision.go:143] copyHostCerts
	I1124 13:50:46.270566  651882 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:50:46.270584  651882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:50:46.270643  651882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:50:46.270761  651882 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:50:46.270773  651882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:50:46.270802  651882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:50:46.270878  651882 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:50:46.270890  651882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:50:46.270932  651882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:50:46.271050  651882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.newest-cni-846862 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-846862]
	I1124 13:50:46.375825  651882 provision.go:177] copyRemoteCerts
	I1124 13:50:46.375885  651882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:50:46.375933  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.397552  651882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa Username:docker}
	I1124 13:50:46.503932  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:50:46.525588  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 13:50:46.547966  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:50:46.570387  651882 provision.go:87] duration metric: took 321.59766ms to configureAuth
	I1124 13:50:46.570418  651882 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:50:46.570668  651882 config.go:182] Loaded profile config "newest-cni-846862": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:46.570686  651882 machine.go:97] duration metric: took 3.852145082s to provisionDockerMachine
	I1124 13:50:46.570696  651882 start.go:293] postStartSetup for "newest-cni-846862" (driver="docker")
	I1124 13:50:46.570708  651882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:50:46.570770  651882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:50:46.570818  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.591385  651882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa Username:docker}
	I1124 13:50:46.702143  651882 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:50:46.706362  651882 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:50:46.706395  651882 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:50:46.706409  651882 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:50:46.706460  651882 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:50:46.706546  651882 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:50:46.706654  651882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:50:46.715261  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:50:46.735125  651882 start.go:296] duration metric: took 164.410202ms for postStartSetup
	I1124 13:50:46.735232  651882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:50:46.735275  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.755027  651882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa Username:docker}
	I1124 13:50:46.858979  651882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:50:46.864572  651882 fix.go:56] duration metric: took 4.537829174s for fixHost
	I1124 13:50:46.864599  651882 start.go:83] releasing machines lock for "newest-cni-846862", held for 4.537885231s
	I1124 13:50:46.864673  651882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-846862
	I1124 13:50:46.884027  651882 ssh_runner.go:195] Run: cat /version.json
	I1124 13:50:46.884089  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.884121  651882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:50:46.884214  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.903156  651882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa Username:docker}
	I1124 13:50:46.904889  651882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa Username:docker}
	I1124 13:50:47.065543  651882 ssh_runner.go:195] Run: systemctl --version
	I1124 13:50:47.072862  651882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:50:47.078127  651882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:50:47.078202  651882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:50:47.087519  651882 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 13:50:47.087548  651882 start.go:496] detecting cgroup driver to use...
	I1124 13:50:47.087586  651882 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:50:47.087641  651882 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:50:47.106491  651882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:50:47.121750  651882 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:50:47.121822  651882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:50:46.451253  648989 cli_runner.go:164] Run: docker network inspect auto-355661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:50:46.470209  648989 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 13:50:46.474806  648989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:50:46.486132  648989 kubeadm.go:884] updating cluster {Name:auto-355661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-355661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:50:46.486269  648989 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:50:46.486339  648989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:50:46.516304  648989 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:50:46.516326  648989 containerd.go:534] Images already preloaded, skipping extraction
	I1124 13:50:46.516391  648989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:50:46.545597  648989 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:50:46.545624  648989 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:50:46.545633  648989 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 13:50:46.545752  648989 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-355661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-355661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:50:46.545820  648989 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:50:46.575441  648989 cni.go:84] Creating CNI manager for ""
	I1124 13:50:46.575469  648989 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:50:46.575490  648989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:50:46.575520  648989 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-355661 NodeName:auto-355661 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:50:46.575721  648989 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-355661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:50:46.575815  648989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:50:46.585434  648989 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:50:46.585527  648989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:50:46.595489  648989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1124 13:50:46.612254  648989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:50:46.630217  648989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
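The 2223-byte kubeadm.yaml.new written here is the rendered config quoted above; it is copied to /var/tmp/minikube/kubeadm.yaml further down before kubeadm init consumes it. As a hedged aside, newer kubeadm releases include a config validate subcommand, so a manual sanity check on the node could look like the following (binary path per the "Found k8s binaries" line below):

	# Validate the generated config with the same v1.34.1 kubeadm the test will run.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml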
	I1124 13:50:46.644676  648989 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:50:46.649232  648989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:50:46.660397  648989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:50:46.753664  648989 ssh_runner.go:195] Run: sudo systemctl start kubelet
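The two scp-to-memory steps above install the kubelet unit and its 10-kubeadm.conf drop-in (the [Service] override quoted earlier), followed by daemon-reload and a kubelet start. A small sketch, assuming the same node shell, for confirming systemd merged the override:

	# The merged unit should carry the ExecStart override with this node's flags.
	sudo systemctl cat kubelet | grep -n 'hostname-override=auto-355661'
	systemctl is-active kubelet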
	I1124 13:50:46.781806  648989 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661 for IP: 192.168.76.2
	I1124 13:50:46.781834  648989 certs.go:195] generating shared ca certs ...
	I1124 13:50:46.781862  648989 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.782080  648989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:50:46.782136  648989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:50:46.782147  648989 certs.go:257] generating profile certs ...
	I1124 13:50:46.782214  648989 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.key
	I1124 13:50:46.782233  648989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.crt with IP's: []
	I1124 13:50:46.864531  648989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.crt ...
	I1124 13:50:46.864566  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.crt: {Name:mkd3e24059f35a20f49945b99b9b69f5ef4934a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.864789  648989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.key ...
	I1124 13:50:46.864809  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.key: {Name:mkf5018446504d7dd904b2d9011f49a8094fea41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.864955  648989 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key.94ea11ee
	I1124 13:50:46.864979  648989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt.94ea11ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 13:50:46.884694  648989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt.94ea11ee ...
	I1124 13:50:46.884728  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt.94ea11ee: {Name:mk16141a12725a6fcc597223a338945a69dfc26f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.884936  648989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key.94ea11ee ...
	I1124 13:50:46.884955  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key.94ea11ee: {Name:mkace1ece8ca39df04e578660ab45b136cea4475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.885067  648989 certs.go:382] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt.94ea11ee -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt
	I1124 13:50:46.885185  648989 certs.go:386] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key.94ea11ee -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key
	I1124 13:50:46.885273  648989 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.key
	I1124 13:50:46.885294  648989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.crt with IP's: []
	I1124 13:50:46.930841  648989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.crt ...
	I1124 13:50:46.930869  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.crt: {Name:mkb43906cc9d74e972dbbd959e1e8bed60339465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.931108  648989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.key ...
	I1124 13:50:46.931134  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.key: {Name:mk10f369f3ab4d5388d6a45f0e5b41235040ba94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.931410  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:50:46.931456  648989 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:50:46.931466  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:50:46.931492  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:50:46.931517  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:50:46.931541  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:50:46.931581  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:50:46.932310  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:50:46.954721  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:50:46.974887  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:50:46.994308  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:50:47.015895  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 13:50:47.037240  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:50:47.058093  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:50:47.079810  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:50:47.099802  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:50:47.124855  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:50:47.144975  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:50:47.168266  648989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:50:47.186289  648989 ssh_runner.go:195] Run: openssl version
	I1124 13:50:47.196342  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:50:47.206522  648989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:50:47.211167  648989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:50:47.211236  648989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:50:47.249619  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
	I1124 13:50:47.259905  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:50:47.273689  648989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:50:47.279800  648989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:50:47.279872  648989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:50:47.316446  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:50:47.326770  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:50:47.336300  648989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:50:47.341033  648989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:50:47.341102  648989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:50:47.383825  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
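The openssl x509 -hash calls above compute the OpenSSL subject hash that names the /etc/ssl/certs/<hash>.0 symlinks created right after them (51391683.0, 3ec20f2e.0, b5213941.0). A minimal sketch of the same convention for the minikube CA, using only paths already present in this log:

	# Derive the subject hash and create the c_rehash-style symlink, as the commands above do.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 per the log
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"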
	I1124 13:50:47.398335  648989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:50:47.403148  648989 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:50:47.403203  648989 kubeadm.go:401] StartCluster: {Name:auto-355661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-355661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:50:47.403296  648989 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:50:47.403357  648989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:50:47.436111  648989 cri.go:89] found id: ""
	I1124 13:50:47.436210  648989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:50:47.446133  648989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:50:47.455407  648989 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:50:47.455486  648989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:50:47.464355  648989 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:50:47.464377  648989 kubeadm.go:158] found existing configuration files:
	
	I1124 13:50:47.464440  648989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:50:47.473268  648989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:50:47.473348  648989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:50:47.485942  648989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:50:47.496775  648989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:50:47.496837  648989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:50:47.506132  648989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:50:47.515713  648989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:50:47.515793  648989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:50:47.524588  648989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:50:47.533827  648989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:50:47.534026  648989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:50:47.543012  648989 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:50:47.589659  648989 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:50:47.589709  648989 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:50:47.624899  648989 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:50:47.625073  648989 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:50:47.625132  648989 kubeadm.go:319] OS: Linux
	I1124 13:50:47.625196  648989 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:50:47.625265  648989 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:50:47.625332  648989 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:50:47.625405  648989 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:50:47.625502  648989 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:50:47.625586  648989 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:50:47.625648  648989 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:50:47.625709  648989 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:50:47.696420  648989 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:50:47.696568  648989 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:50:47.696736  648989 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:50:47.703161  648989 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:50:47.138526  651882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:50:47.153295  651882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:50:47.242319  651882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:50:47.324562  651882 docker.go:234] disabling docker service ...
	I1124 13:50:47.324637  651882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:50:47.341049  651882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:50:47.356938  651882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:50:47.452582  651882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:50:47.544177  651882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:50:47.560293  651882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:50:47.577441  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 13:50:47.588652  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:50:47.600054  651882 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:50:47.600132  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:50:47.611886  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:50:47.623205  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:50:47.636094  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:50:47.647489  651882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:50:47.658581  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:50:47.669878  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:50:47.680767  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:50:47.691087  651882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:50:47.701276  651882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:50:47.711355  651882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:50:47.796448  651882 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 13:50:47.920418  651882 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:50:47.920490  651882 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:50:47.925233  651882 start.go:564] Will wait 60s for crictl version
	I1124 13:50:47.925295  651882 ssh_runner.go:195] Run: which crictl
	I1124 13:50:47.929614  651882 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:50:47.959230  651882 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:50:47.959305  651882 ssh_runner.go:195] Run: containerd --version
	I1124 13:50:47.983458  651882 ssh_runner.go:195] Run: containerd --version
	I1124 13:50:48.010800  651882 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 13:50:48.012433  651882 cli_runner.go:164] Run: docker network inspect newest-cni-846862 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:50:48.030515  651882 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 13:50:48.035097  651882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:50:48.048936  651882 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
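	
	The runtime-preparation steps logged above amount to rewriting /etc/containerd/config.toml and restarting the service. A condensed sketch of those node-side commands, taken from the sed invocations in the log (run inside the minikube node, not on the host), looks roughly like:
	
	  # pause image, systemd cgroup driver, and runc v2 shim, as in the log above
	  sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	  sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	  sudo systemctl daemon-reload && sudo systemctl restart containerd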
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a9ab60408f323       56cc512116c8f       7 seconds ago       Running             busybox                   0                   c5d92459bd00b       busybox                                      default
	03fe961e764d7       52546a367cc9e       13 seconds ago      Running             coredns                   0                   38bcb7e597a37       coredns-66bc5c9577-rn6dx                     kube-system
	902e0b827ac38       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   88128ddc54ac4       storage-provisioner                          kube-system
	5914c57b066be       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   c2cb83d96d081       kindnet-sq6tm                                kube-system
	2070bf9a47086       fc25172553d79       25 seconds ago      Running             kube-proxy                0                   05dffec676416       kube-proxy-6v565                             kube-system
	85c949a723925       c80c8dbafe7dd       36 seconds ago      Running             kube-controller-manager   0                   a1892275d242f       kube-controller-manager-embed-certs-971503   kube-system
	4d4c96dea1c1c       c3994bc696102       36 seconds ago      Running             kube-apiserver            0                   9ff27781ab7f8       kube-apiserver-embed-certs-971503            kube-system
	c163234ff7ad3       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   01d5c3ce57e02       kube-scheduler-embed-certs-971503            kube-system
	ffd3e36daeb72       5f1f5298c888d       36 seconds ago      Running             etcd                      0                   0c01bc25b98c3       etcd-embed-certs-971503                      kube-system
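	
	The table above is a CRI-level container listing. Assuming the crictl endpoint written to /etc/crictl.yaml earlier in the log, the same view can be reproduced on the node with something like:
	
	  sudo crictl ps -a                # all containers, matching the table above
	  sudo crictl logs a9ab60408f323   # e.g. output from the busybox container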
	
	
	==> containerd <==
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.498673622Z" level=info msg="StartContainer for \"902e0b827ac388051eac7d9f68d5880c4b79476d24f7f6056d599b6adeab7723\""
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.500108765Z" level=info msg="connecting to shim 902e0b827ac388051eac7d9f68d5880c4b79476d24f7f6056d599b6adeab7723" address="unix:///run/containerd/s/aed2f838531fc051aae5f30a4ebdae656b0b8dae5aa68e7b19371385baca70ec" protocol=ttrpc version=3
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.505889733Z" level=info msg="CreateContainer within sandbox \"38bcb7e597a3798bd1c14e1053e62d1375ed6ef1c3b634b8f17b54da9be12785\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.525272553Z" level=info msg="Container 03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.540886465Z" level=info msg="CreateContainer within sandbox \"38bcb7e597a3798bd1c14e1053e62d1375ed6ef1c3b634b8f17b54da9be12785\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122\""
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.545314825Z" level=info msg="StartContainer for \"03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122\""
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.546542695Z" level=info msg="connecting to shim 03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122" address="unix:///run/containerd/s/57789ae6c48c2d5970e6abf243c6217948592f7b200de2842d7df5688bff575f" protocol=ttrpc version=3
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.632243833Z" level=info msg="StartContainer for \"902e0b827ac388051eac7d9f68d5880c4b79476d24f7f6056d599b6adeab7723\" returns successfully"
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.644888424Z" level=info msg="StartContainer for \"03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122\" returns successfully"
	Nov 24 13:50:40 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:40.306192122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc,Namespace:default,Attempt:0,}"
	Nov 24 13:50:40 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:40.390138758Z" level=info msg="connecting to shim c5d92459bd00b6cd97d7596fce69259de60b42e27cb4e0ce577931f91218ebe8" address="unix:///run/containerd/s/c7eec423d317874953f67e786b9cc9eeac10d0633f922281da92fc7f6d52dee9" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 13:50:40 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:40.473810719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc,Namespace:default,Attempt:0,} returns sandbox id \"c5d92459bd00b6cd97d7596fce69259de60b42e27cb4e0ce577931f91218ebe8\""
	Nov 24 13:50:40 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:40.476459295Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.512077492Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.512842081Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.514367616Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.518128824Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.518984328Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.042472881s"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.519040677Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.524304727Z" level=info msg="CreateContainer within sandbox \"c5d92459bd00b6cd97d7596fce69259de60b42e27cb4e0ce577931f91218ebe8\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.532905722Z" level=info msg="Container a9ab60408f3233d2440967de0e8ea69eb28b13ab6543f1e8c5b922c4b0a15eb3: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.541301596Z" level=info msg="CreateContainer within sandbox \"c5d92459bd00b6cd97d7596fce69259de60b42e27cb4e0ce577931f91218ebe8\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"a9ab60408f3233d2440967de0e8ea69eb28b13ab6543f1e8c5b922c4b0a15eb3\""
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.542196865Z" level=info msg="StartContainer for \"a9ab60408f3233d2440967de0e8ea69eb28b13ab6543f1e8c5b922c4b0a15eb3\""
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.543218914Z" level=info msg="connecting to shim a9ab60408f3233d2440967de0e8ea69eb28b13ab6543f1e8c5b922c4b0a15eb3" address="unix:///run/containerd/s/c7eec423d317874953f67e786b9cc9eeac10d0633f922281da92fc7f6d52dee9" protocol=ttrpc version=3
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.600897748Z" level=info msg="StartContainer for \"a9ab60408f3233d2440967de0e8ea69eb28b13ab6543f1e8c5b922c4b0a15eb3\" returns successfully"
	
	
	==> coredns [03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57449 - 39200 "HINFO IN 8583295162172501320.1839365236584150202. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.100038879s
	
	
	==> describe nodes <==
	Name:               embed-certs-971503
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-971503
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=embed-certs-971503
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_50_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:50:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-971503
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:50:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:50:49 +0000   Mon, 24 Nov 2025 13:50:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:50:49 +0000   Mon, 24 Nov 2025 13:50:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:50:49 +0000   Mon, 24 Nov 2025 13:50:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:50:49 +0000   Mon, 24 Nov 2025 13:50:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-971503
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                960977f5-8c2d-4dbc-a619-abd3283e065f
	  Boot ID:                    715d4626-373f-499b-b5de-b6d832ce4fe4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-rn6dx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-971503                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-sq6tm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-971503             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-971503    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-6v565                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-971503             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  37s (x8 over 38s)  kubelet          Node embed-certs-971503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 38s)  kubelet          Node embed-certs-971503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 38s)  kubelet          Node embed-certs-971503 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node embed-certs-971503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node embed-certs-971503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node embed-certs-971503 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node embed-certs-971503 event: Registered Node embed-certs-971503 in Controller
	  Normal  NodeReady                15s                kubelet          Node embed-certs-971503 status is now: NodeReady
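	
	The node snapshot above is ordinary kubectl describe output for the embed-certs node; a rough way to re-check the same state from the test host (context name assumed to match the node/profile name) is:
	
	  kubectl --context embed-certs-971503 describe node embed-certs-971503
	  kubectl --context embed-certs-971503 get pods -A -o wide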
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[Nov24 12:45] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a fb 84 7d 9e 9e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[ +25.292047] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[  +0.024207] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 8e 71 0b 76 c3 08 06
	[ +16.768103] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	[  +5.950770] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[Nov24 12:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 8b d0 4a da 7e 08 06
	[  +0.000557] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[  +1.903671] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 1f e8 fc 59 74 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[ +17.535584] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 31 ec 7c 1d 38 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	
	
	==> etcd [ffd3e36daeb7225019c06b3e57efdea55f1463f1d72e997c0f78f1bf1d568f51] <==
	{"level":"warn","ts":"2025-11-24T13:50:15.162231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.194361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.223289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.237057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.245608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.255835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.265305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.275888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.284773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.293101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.302086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.318454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.322634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.331586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.342902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.409789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46220","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:50:24.640989Z","caller":"traceutil/trace.go:172","msg":"trace[784258116] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"103.368273ms","start":"2025-11-24T13:50:24.537595Z","end":"2025-11-24T13:50:24.640963Z","steps":["trace[784258116] 'process raft request'  (duration: 103.212478ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:50:39.583450Z","caller":"traceutil/trace.go:172","msg":"trace[20253060] linearizableReadLoop","detail":"{readStateIndex:476; appliedIndex:476; }","duration":"144.322539ms","start":"2025-11-24T13:50:39.439099Z","end":"2025-11-24T13:50:39.583422Z","steps":["trace[20253060] 'read index received'  (duration: 144.311573ms)","trace[20253060] 'applied index is now lower than readState.Index'  (duration: 9.645µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:50:39.583589Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.455158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:50:39.583679Z","caller":"traceutil/trace.go:172","msg":"trace[119140308] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:461; }","duration":"144.572561ms","start":"2025-11-24T13:50:39.439093Z","end":"2025-11-24T13:50:39.583666Z","steps":["trace[119140308] 'agreement among raft nodes before linearized reading'  (duration: 144.412549ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:50:39.583777Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.011915ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:50:39.583833Z","caller":"traceutil/trace.go:172","msg":"trace[1827108258] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:462; }","duration":"142.079218ms","start":"2025-11-24T13:50:39.441743Z","end":"2025-11-24T13:50:39.583822Z","steps":["trace[1827108258] 'agreement among raft nodes before linearized reading'  (duration: 141.988852ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:50:39.583832Z","caller":"traceutil/trace.go:172","msg":"trace[1806007110] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"155.737315ms","start":"2025-11-24T13:50:39.428080Z","end":"2025-11-24T13:50:39.583817Z","steps":["trace[1806007110] 'process raft request'  (duration: 155.37479ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:50:39.729093Z","caller":"traceutil/trace.go:172","msg":"trace[1405680079] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"141.818569ms","start":"2025-11-24T13:50:39.587244Z","end":"2025-11-24T13:50:39.729063Z","steps":["trace[1405680079] 'process raft request'  (duration: 127.397287ms)","trace[1405680079] 'compare'  (duration: 14.202296ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:50:40.006812Z","caller":"traceutil/trace.go:172","msg":"trace[1510918285] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"132.558219ms","start":"2025-11-24T13:50:39.874232Z","end":"2025-11-24T13:50:40.006791Z","steps":["trace[1510918285] 'process raft request'  (duration: 132.441084ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:50:50 up  2:33,  0 user,  load average: 5.41, 3.61, 2.31
	Linux embed-certs-971503 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5914c57b066be204389d90bfe7aeb5e3db92f6c228983299bb27fea23671aace] <==
	I1124 13:50:25.551982       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:50:25.552463       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 13:50:25.552792       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:50:25.552953       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:50:25.552985       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:50:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:50:25.874069       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:50:25.874176       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:50:25.874196       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:50:25.884325       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:50:26.076334       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:50:26.076385       1 metrics.go:72] Registering metrics
	I1124 13:50:26.076449       1 controller.go:711] "Syncing nftables rules"
	I1124 13:50:35.865234       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:50:35.865299       1 main.go:301] handling current node
	I1124 13:50:45.864948       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:50:45.865013       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4d4c96dea1c1ca7b866ccee2653eabf5ae5fd0a8eeb603e57a8901e9d474ccf3] <==
	E1124 13:50:16.099032       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 13:50:16.142473       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:50:16.147733       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:50:16.150611       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:16.160390       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:16.160757       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:50:16.274906       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:50:16.946221       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:50:16.954311       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:50:16.954329       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:50:17.887861       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:50:17.936349       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:50:17.990065       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:50:18.054449       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:50:18.062795       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 13:50:18.064142       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:50:18.070417       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:50:18.866890       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:50:18.879824       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:50:18.892093       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:50:23.093639       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:50:23.897080       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:23.902647       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:24.102276       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 13:50:48.572011       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:48634: use of closed network connection
	
	
	==> kube-controller-manager [85c949a723925862fb7aea2e303b50f684e0ffbc8e97734a1fa52293509d4ae6] <==
	I1124 13:50:22.991369       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 13:50:22.992324       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:50:22.993204       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 13:50:22.993216       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:50:22.994370       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:50:22.994390       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:50:22.995976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:50:22.997091       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 13:50:22.997178       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 13:50:22.997251       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 13:50:22.997260       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 13:50:22.997267       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 13:50:23.000337       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:50:23.008384       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-971503" podCIDRs=["10.244.0.0/24"]
	I1124 13:50:23.015391       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 13:50:23.015518       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 13:50:23.015959       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:50:23.016323       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:50:23.018483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:50:23.019690       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 13:50:23.019825       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 13:50:23.019972       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-971503"
	I1124 13:50:23.020048       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 13:50:23.028392       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:50:38.022657       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2070bf9a4708666ced634ffb7847907fc0d7071448fb6af6d357d643fba294b2] <==
	I1124 13:50:25.094123       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:50:25.147348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:50:25.248959       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:50:25.249016       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 13:50:25.249117       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:50:25.300868       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:50:25.301013       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:50:25.323034       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:50:25.324134       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:50:25.324292       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:50:25.331372       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:50:25.331473       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:50:25.331495       1 config.go:200] "Starting service config controller"
	I1124 13:50:25.333717       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:50:25.331695       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:50:25.334209       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:50:25.334895       1 config.go:309] "Starting node config controller"
	I1124 13:50:25.335688       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:50:25.336035       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:50:25.433553       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:50:25.434806       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:50:25.434966       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c163234ff7ad3dc9ac0841e5d5172ff77e045691de7b1aab98c5df56611d396c] <==
	E1124 13:50:16.079872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:50:16.079889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:50:16.079765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:50:16.080108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:50:16.080282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:50:16.887614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:50:16.988454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:50:16.992169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:50:17.001643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:50:17.101626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:50:17.141857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:50:17.181257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:50:17.185129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:50:17.222757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:50:17.263405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:50:17.366368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:50:17.380294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:50:17.419199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:50:17.428685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:50:17.466295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:50:17.588138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:50:17.610734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:50:17.614406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:50:17.618752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1124 13:50:20.572735       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:50:19 embed-certs-971503 kubelet[1447]: I1124 13:50:19.781361    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-971503" podStartSLOduration=1.781337789 podStartE2EDuration="1.781337789s" podCreationTimestamp="2025-11-24 13:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:19.781294102 +0000 UTC m=+1.170245131" watchObservedRunningTime="2025-11-24 13:50:19.781337789 +0000 UTC m=+1.170288801"
	Nov 24 13:50:19 embed-certs-971503 kubelet[1447]: E1124 13:50:19.783254    1447 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-971503\" already exists" pod="kube-system/etcd-embed-certs-971503"
	Nov 24 13:50:19 embed-certs-971503 kubelet[1447]: E1124 13:50:19.784231    1447 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-971503\" already exists" pod="kube-system/kube-scheduler-embed-certs-971503"
	Nov 24 13:50:19 embed-certs-971503 kubelet[1447]: E1124 13:50:19.784233    1447 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-971503\" already exists" pod="kube-system/kube-apiserver-embed-certs-971503"
	Nov 24 13:50:23 embed-certs-971503 kubelet[1447]: I1124 13:50:23.031266    1447 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 13:50:23 embed-certs-971503 kubelet[1447]: I1124 13:50:23.032127    1447 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252000    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c305a92d-fa9b-4b8a-baf4-d95e66619f08-lib-modules\") pod \"kube-proxy-6v565\" (UID: \"c305a92d-fa9b-4b8a-baf4-d95e66619f08\") " pod="kube-system/kube-proxy-6v565"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252060    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dfae42e1-154e-45f7-b1bd-86d3826cf4bf-cni-cfg\") pod \"kindnet-sq6tm\" (UID: \"dfae42e1-154e-45f7-b1bd-86d3826cf4bf\") " pod="kube-system/kindnet-sq6tm"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252099    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9bdj\" (UniqueName: \"kubernetes.io/projected/dfae42e1-154e-45f7-b1bd-86d3826cf4bf-kube-api-access-q9bdj\") pod \"kindnet-sq6tm\" (UID: \"dfae42e1-154e-45f7-b1bd-86d3826cf4bf\") " pod="kube-system/kindnet-sq6tm"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252122    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c305a92d-fa9b-4b8a-baf4-d95e66619f08-kube-proxy\") pod \"kube-proxy-6v565\" (UID: \"c305a92d-fa9b-4b8a-baf4-d95e66619f08\") " pod="kube-system/kube-proxy-6v565"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252143    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c305a92d-fa9b-4b8a-baf4-d95e66619f08-xtables-lock\") pod \"kube-proxy-6v565\" (UID: \"c305a92d-fa9b-4b8a-baf4-d95e66619f08\") " pod="kube-system/kube-proxy-6v565"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252182    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94gkz\" (UniqueName: \"kubernetes.io/projected/c305a92d-fa9b-4b8a-baf4-d95e66619f08-kube-api-access-94gkz\") pod \"kube-proxy-6v565\" (UID: \"c305a92d-fa9b-4b8a-baf4-d95e66619f08\") " pod="kube-system/kube-proxy-6v565"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252203    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfae42e1-154e-45f7-b1bd-86d3826cf4bf-xtables-lock\") pod \"kindnet-sq6tm\" (UID: \"dfae42e1-154e-45f7-b1bd-86d3826cf4bf\") " pod="kube-system/kindnet-sq6tm"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252235    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfae42e1-154e-45f7-b1bd-86d3826cf4bf-lib-modules\") pod \"kindnet-sq6tm\" (UID: \"dfae42e1-154e-45f7-b1bd-86d3826cf4bf\") " pod="kube-system/kindnet-sq6tm"
	Nov 24 13:50:25 embed-certs-971503 kubelet[1447]: I1124 13:50:25.820468    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6v565" podStartSLOduration=1.8204401639999999 podStartE2EDuration="1.820440164s" podCreationTimestamp="2025-11-24 13:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:25.817394098 +0000 UTC m=+7.206345131" watchObservedRunningTime="2025-11-24 13:50:25.820440164 +0000 UTC m=+7.209391194"
	Nov 24 13:50:25 embed-certs-971503 kubelet[1447]: I1124 13:50:25.869086    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-sq6tm" podStartSLOduration=1.869064336 podStartE2EDuration="1.869064336s" podCreationTimestamp="2025-11-24 13:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:25.846243674 +0000 UTC m=+7.235194704" watchObservedRunningTime="2025-11-24 13:50:25.869064336 +0000 UTC m=+7.258015368"
	Nov 24 13:50:35 embed-certs-971503 kubelet[1447]: I1124 13:50:35.962971    1447 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 13:50:36 embed-certs-971503 kubelet[1447]: I1124 13:50:36.037657    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/decde4d2-8595-422a-a5e9-8f5b2019833e-config-volume\") pod \"coredns-66bc5c9577-rn6dx\" (UID: \"decde4d2-8595-422a-a5e9-8f5b2019833e\") " pod="kube-system/coredns-66bc5c9577-rn6dx"
	Nov 24 13:50:36 embed-certs-971503 kubelet[1447]: I1124 13:50:36.037699    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5r4r\" (UniqueName: \"kubernetes.io/projected/decde4d2-8595-422a-a5e9-8f5b2019833e-kube-api-access-w5r4r\") pod \"coredns-66bc5c9577-rn6dx\" (UID: \"decde4d2-8595-422a-a5e9-8f5b2019833e\") " pod="kube-system/coredns-66bc5c9577-rn6dx"
	Nov 24 13:50:36 embed-certs-971503 kubelet[1447]: I1124 13:50:36.037724    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhc6\" (UniqueName: \"kubernetes.io/projected/faebcb7e-12bd-45e5-a6f6-420848719e73-kube-api-access-7jhc6\") pod \"storage-provisioner\" (UID: \"faebcb7e-12bd-45e5-a6f6-420848719e73\") " pod="kube-system/storage-provisioner"
	Nov 24 13:50:36 embed-certs-971503 kubelet[1447]: I1124 13:50:36.037741    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/faebcb7e-12bd-45e5-a6f6-420848719e73-tmp\") pod \"storage-provisioner\" (UID: \"faebcb7e-12bd-45e5-a6f6-420848719e73\") " pod="kube-system/storage-provisioner"
	Nov 24 13:50:36 embed-certs-971503 kubelet[1447]: I1124 13:50:36.898491    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rn6dx" podStartSLOduration=12.898464625999999 podStartE2EDuration="12.898464626s" podCreationTimestamp="2025-11-24 13:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:36.8654932 +0000 UTC m=+18.254444230" watchObservedRunningTime="2025-11-24 13:50:36.898464626 +0000 UTC m=+18.287415659"
	Nov 24 13:50:39 embed-certs-971503 kubelet[1447]: I1124 13:50:39.585320    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.58529448 podStartE2EDuration="15.58529448s" podCreationTimestamp="2025-11-24 13:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:36.940747632 +0000 UTC m=+18.329698675" watchObservedRunningTime="2025-11-24 13:50:39.58529448 +0000 UTC m=+20.974245511"
	Nov 24 13:50:39 embed-certs-971503 kubelet[1447]: I1124 13:50:39.864816    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vm9\" (UniqueName: \"kubernetes.io/projected/3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc-kube-api-access-47vm9\") pod \"busybox\" (UID: \"3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc\") " pod="default/busybox"
	Nov 24 13:50:42 embed-certs-971503 kubelet[1447]: I1124 13:50:42.885214    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.840835744 podStartE2EDuration="3.885193289s" podCreationTimestamp="2025-11-24 13:50:39 +0000 UTC" firstStartedPulling="2025-11-24 13:50:40.475809678 +0000 UTC m=+21.864760691" lastFinishedPulling="2025-11-24 13:50:42.520167213 +0000 UTC m=+23.909118236" observedRunningTime="2025-11-24 13:50:42.885008365 +0000 UTC m=+24.273959395" watchObservedRunningTime="2025-11-24 13:50:42.885193289 +0000 UTC m=+24.274144318"
	
	
	==> storage-provisioner [902e0b827ac388051eac7d9f68d5880c4b79476d24f7f6056d599b6adeab7723] <==
	I1124 13:50:36.643497       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:50:36.662465       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:50:36.662642       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 13:50:36.666666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:36.677628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:50:36.677819       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:50:36.678078       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-971503_69704622-0a0d-4cc9-a4e7-07c848af476e!
	I1124 13:50:36.678141       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"92a66980-4b02-4382-ab72-46a2cccd67dc", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-971503_69704622-0a0d-4cc9-a4e7-07c848af476e became leader
	W1124 13:50:36.692305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:36.700713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:50:36.779034       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-971503_69704622-0a0d-4cc9-a4e7-07c848af476e!
	W1124 13:50:38.706049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:38.797087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:40.800604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:40.806518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:42.810164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:42.814420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:44.818471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:44.823113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:46.827317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:46.832748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:48.836694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:48.841769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-971503 -n embed-certs-971503
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-971503 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
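For reference, the post-mortem block above is assembled from a handful of standalone commands that can be re-run by hand; a minimal sketch against this profile (assuming the out/minikube-linux-amd64 binary and the embed-certs-971503 profile/context names used in this run):

	# container-level state of the cluster node
	docker inspect embed-certs-971503
	# host and apiserver health as minikube reports it
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-971503 -n embed-certs-971503
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-971503 -n embed-certs-971503
	# names of any pods not in the Running phase
	kubectl --context embed-certs-971503 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# last 25 entries of the aggregated cluster logs
	out/minikube-linux-amd64 -p embed-certs-971503 logs -n 25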
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-971503
helpers_test.go:243: (dbg) docker inspect embed-certs-971503:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2",
	        "Created": "2025-11-24T13:49:58.810032472Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 637504,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:49:58.858286336Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2/hosts",
	        "LogPath": "/var/lib/docker/containers/1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2/1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2-json.log",
	        "Name": "/embed-certs-971503",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-971503:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-971503",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1974ba44039b4457f1634f00ecd3b03b26eab33684498fe62152504698f0baf2",
	                "LowerDir": "/var/lib/docker/overlay2/3fccab807900f71d48edb071c3cc12aa6ab08c6868d12372bae8553c81a35f4a-init/diff:/var/lib/docker/overlay2/0f013e03fd0eaee4efc608fb0376e7d3e8ba628388f5191310c2259ab273ad26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3fccab807900f71d48edb071c3cc12aa6ab08c6868d12372bae8553c81a35f4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3fccab807900f71d48edb071c3cc12aa6ab08c6868d12372bae8553c81a35f4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3fccab807900f71d48edb071c3cc12aa6ab08c6868d12372bae8553c81a35f4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-971503",
	                "Source": "/var/lib/docker/volumes/embed-certs-971503/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-971503",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-971503",
	                "name.minikube.sigs.k8s.io": "embed-certs-971503",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "46c1be6c4a896de5cb42b792366beae66ff9d81d76d5061910d35fe4f58e9211",
	            "SandboxKey": "/var/run/docker/netns/46c1be6c4a89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-971503": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "126edd368031c93a1e03fe1e5d53e7ff92fac3cd9bbf73b49a1b9d47979d9142",
	                    "EndpointID": "7dc19034c85b8f8b0057144eea75257b752635826d4c117a986ef3d3445b1853",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "3e:c6:3c:93:78:9a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-971503",
	                        "1974ba44039b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
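The HostConfig section of the inspect dump above reports "Ulimits": [], i.e. no per-container ulimit overrides are set on this node container. A hedged one-liner (using docker's Go-template formatting, assuming the same container name) pulls just that field:

	# should print [] for this container, matching the full dump above
	docker inspect --format '{{json .HostConfig.Ulimits}}' embed-certs-971503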
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-971503 -n embed-certs-971503
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-971503 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-971503 logs -n 25: (1.472365603s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p no-preload-608395 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:48 UTC │ 24 Nov 25 13:49 UTC │
	│ image   │ old-k8s-version-513442 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-513442       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ pause   │ -p old-k8s-version-513442 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-513442       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ unpause │ -p old-k8s-version-513442 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-513442       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ delete  │ -p old-k8s-version-513442                                                                                                                                                                                                                           │ old-k8s-version-513442       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ delete  │ -p old-k8s-version-513442                                                                                                                                                                                                                           │ old-k8s-version-513442       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ start   │ -p embed-certs-971503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-971503           │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p cert-expiration-099863 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-099863       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ image   │ no-preload-608395 image list --format=json                                                                                                                                                                                                          │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ pause   │ -p no-preload-608395 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ unpause │ -p no-preload-608395 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ delete  │ -p no-preload-608395                                                                                                                                                                                                                                │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p cert-expiration-099863                                                                                                                                                                                                                           │ cert-expiration-099863       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p no-preload-608395                                                                                                                                                                                                                                │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p disable-driver-mounts-312087                                                                                                                                                                                                                     │ disable-driver-mounts-312087 │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p default-k8s-diff-port-403602 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-403602 │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ start   │ -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ start   │ -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p kubernetes-upgrade-358357                                                                                                                                                                                                                        │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p auto-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-355661                  │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-846862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ stop    │ -p newest-cni-846862 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-846862 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:50:42
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:50:42.121825  651882 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:50:42.122161  651882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:50:42.122174  651882 out.go:374] Setting ErrFile to fd 2...
	I1124 13:50:42.122181  651882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:50:42.122400  651882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:50:42.123105  651882 out.go:368] Setting JSON to false
	I1124 13:50:42.124490  651882 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9181,"bootTime":1763983061,"procs":373,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:50:42.124564  651882 start.go:143] virtualization: kvm guest
	I1124 13:50:42.126953  651882 out.go:179] * [newest-cni-846862] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:50:42.128929  651882 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:50:42.128955  651882 notify.go:221] Checking for updates...
	I1124 13:50:42.132310  651882 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:50:42.133947  651882 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:50:42.135540  651882 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:50:42.137148  651882 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:50:42.138632  651882 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:50:42.140607  651882 config.go:182] Loaded profile config "newest-cni-846862": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:42.141361  651882 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:50:42.166706  651882 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:50:42.166821  651882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:50:42.227448  651882 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 13:50:42.216063705 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:50:42.227551  651882 docker.go:319] overlay module found
	I1124 13:50:42.229629  651882 out.go:179] * Using the docker driver based on existing profile
	I1124 13:50:42.231073  651882 start.go:309] selected driver: docker
	I1124 13:50:42.231094  651882 start.go:927] validating driver "docker" against &{Name:newest-cni-846862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-846862 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:50:42.231208  651882 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:50:42.231895  651882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:50:42.297061  651882 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 13:50:42.287060392 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:50:42.297368  651882 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 13:50:42.297403  651882 cni.go:84] Creating CNI manager for ""
	I1124 13:50:42.297465  651882 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:50:42.297504  651882 start.go:353] cluster config:
	{Name:newest-cni-846862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-846862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:50:42.299821  651882 out.go:179] * Starting "newest-cni-846862" primary control-plane node in "newest-cni-846862" cluster
	I1124 13:50:42.301417  651882 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:50:42.303070  651882 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:50:42.304544  651882 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:50:42.304588  651882 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 13:50:42.304613  651882 cache.go:65] Caching tarball of preloaded images
	I1124 13:50:42.304644  651882 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:50:42.304785  651882 preload.go:238] Found /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 13:50:42.304838  651882 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 13:50:42.305081  651882 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/config.json ...
	I1124 13:50:42.326535  651882 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:50:42.326557  651882 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:50:42.326575  651882 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:50:42.326616  651882 start.go:360] acquireMachinesLock for newest-cni-846862: {Name:mkc4689539223e2faafe505852e0d71ad6dc6db7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:50:42.326700  651882 start.go:364] duration metric: took 49.739µs to acquireMachinesLock for "newest-cni-846862"
	I1124 13:50:42.326725  651882 start.go:96] Skipping create...Using existing machine configuration
	I1124 13:50:42.326734  651882 fix.go:54] fixHost starting: 
	I1124 13:50:42.327102  651882 cli_runner.go:164] Run: docker container inspect newest-cni-846862 --format={{.State.Status}}
	I1124 13:50:42.345035  651882 fix.go:112] recreateIfNeeded on newest-cni-846862: state=Stopped err=<nil>
	W1124 13:50:42.345075  651882 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 13:50:40.355827  648989 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-355661:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.222668556s)
	I1124 13:50:40.355867  648989 kic.go:203] duration metric: took 5.222872112s to extract preloaded images to volume ...
	W1124 13:50:40.355996  648989 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:50:40.356041  648989 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:50:40.356095  648989 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:50:40.431653  648989 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-355661 --name auto-355661 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-355661 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-355661 --network auto-355661 --ip 192.168.76.2 --volume auto-355661:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:50:40.831144  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Running}}
	I1124 13:50:40.853554  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Status}}
	I1124 13:50:40.875315  648989 cli_runner.go:164] Run: docker exec auto-355661 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:50:40.932447  648989 oci.go:144] the created container "auto-355661" has a running status.
	I1124 13:50:40.932506  648989 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa...
	I1124 13:50:40.954660  648989 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:50:40.983051  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Status}}
	I1124 13:50:41.009900  648989 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:50:41.009938  648989 kic_runner.go:114] Args: [docker exec --privileged auto-355661 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:50:41.064404  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Status}}
	I1124 13:50:41.087863  648989 machine.go:94] provisionDockerMachine start ...
	I1124 13:50:41.088063  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:41.110771  648989 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:41.111075  648989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 13:50:41.111098  648989 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:50:41.111845  648989 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40402->127.0.0.1:33471: read: connection reset by peer
	I1124 13:50:44.261525  648989 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-355661
	
	I1124 13:50:44.261562  648989 ubuntu.go:182] provisioning hostname "auto-355661"
	I1124 13:50:44.261637  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:44.280595  648989 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:44.280951  648989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 13:50:44.280972  648989 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-355661 && echo "auto-355661" | sudo tee /etc/hostname
	W1124 13:50:43.506495  639073 node_ready.go:57] node "default-k8s-diff-port-403602" has "Ready":"False" status (will retry)
	W1124 13:50:45.506950  639073 node_ready.go:57] node "default-k8s-diff-port-403602" has "Ready":"False" status (will retry)
	I1124 13:50:44.440279  648989 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-355661
	
	I1124 13:50:44.440363  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:44.460139  648989 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:44.460390  648989 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1124 13:50:44.460408  648989 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-355661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-355661/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-355661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:50:44.609344  648989 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:50:44.609380  648989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:50:44.609407  648989 ubuntu.go:190] setting up certificates
	I1124 13:50:44.609436  648989 provision.go:84] configureAuth start
	I1124 13:50:44.609504  648989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-355661
	I1124 13:50:44.628605  648989 provision.go:143] copyHostCerts
	I1124 13:50:44.628692  648989 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:50:44.628709  648989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:50:44.628803  648989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:50:44.628964  648989 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:50:44.628978  648989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:50:44.629039  648989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:50:44.629145  648989 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:50:44.629157  648989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:50:44.629214  648989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:50:44.629311  648989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.auto-355661 san=[127.0.0.1 192.168.76.2 auto-355661 localhost minikube]
	I1124 13:50:44.783833  648989 provision.go:177] copyRemoteCerts
	I1124 13:50:44.783921  648989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:50:44.783977  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:44.802659  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:50:44.909986  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:50:44.933221  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 13:50:44.954758  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:50:44.974785  648989 provision.go:87] duration metric: took 365.326039ms to configureAuth
	I1124 13:50:44.974819  648989 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:50:44.975019  648989 config.go:182] Loaded profile config "auto-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:44.975036  648989 machine.go:97] duration metric: took 3.887149716s to provisionDockerMachine
	I1124 13:50:44.975045  648989 client.go:176] duration metric: took 10.421636389s to LocalClient.Create
	I1124 13:50:44.975067  648989 start.go:167] duration metric: took 10.421716261s to libmachine.API.Create "auto-355661"
	I1124 13:50:44.975080  648989 start.go:293] postStartSetup for "auto-355661" (driver="docker")
	I1124 13:50:44.975095  648989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:50:44.975156  648989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:50:44.975207  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:44.993898  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:50:45.100724  648989 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:50:45.105028  648989 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:50:45.105060  648989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:50:45.105073  648989 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:50:45.105136  648989 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:50:45.105251  648989 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:50:45.105394  648989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:50:45.114344  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:50:45.137975  648989 start.go:296] duration metric: took 162.873069ms for postStartSetup
	I1124 13:50:45.138393  648989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-355661
	I1124 13:50:45.158491  648989 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/config.json ...
	I1124 13:50:45.158764  648989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:50:45.158827  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:45.178143  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:50:45.279727  648989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:50:45.284888  648989 start.go:128] duration metric: took 10.734784311s to createHost
	I1124 13:50:45.284931  648989 start.go:83] releasing machines lock for "auto-355661", held for 10.73507567s
	I1124 13:50:45.285021  648989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-355661
	I1124 13:50:45.303787  648989 ssh_runner.go:195] Run: cat /version.json
	I1124 13:50:45.303838  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:45.303889  648989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:50:45.304034  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:50:45.323852  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:50:45.324019  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:50:45.425014  648989 ssh_runner.go:195] Run: systemctl --version
	I1124 13:50:45.485705  648989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:50:45.491273  648989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:50:45.491339  648989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:50:45.521276  648989 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:50:45.521303  648989 start.go:496] detecting cgroup driver to use...
	I1124 13:50:45.521335  648989 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:50:45.521382  648989 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:50:45.537220  648989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:50:45.553404  648989 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:50:45.553465  648989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:50:45.571827  648989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:50:45.591089  648989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:50:45.680282  648989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:50:45.777471  648989 docker.go:234] disabling docker service ...
	I1124 13:50:45.777536  648989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:50:45.799305  648989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:50:45.814296  648989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:50:45.910408  648989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:50:46.009028  648989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:50:46.023793  648989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:50:46.040070  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 13:50:46.052732  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:50:46.063321  648989 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:50:46.063398  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:50:46.074310  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:50:46.084449  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:50:46.094858  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:50:46.105290  648989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:50:46.115329  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:50:46.125424  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:50:46.135630  648989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:50:46.146962  648989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:50:46.156057  648989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:50:46.165638  648989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:50:46.251435  648989 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 13:50:46.360319  648989 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:50:46.360396  648989 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:50:46.365012  648989 start.go:564] Will wait 60s for crictl version
	I1124 13:50:46.365082  648989 ssh_runner.go:195] Run: which crictl
	I1124 13:50:46.369353  648989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:50:46.399383  648989 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:50:46.399448  648989 ssh_runner.go:195] Run: containerd --version
	I1124 13:50:46.422950  648989 ssh_runner.go:195] Run: containerd --version
	I1124 13:50:46.449391  648989 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
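	(The sequence above rewrites /etc/containerd/config.toml with a series of `sed -i -r` edits — pinning sandbox_image to registry.k8s.io/pause:3.10.1, forcing SystemdCgroup = true to match the detected "systemd" cgroup driver, and resetting the CNI conf_dir — then restarts containerd and verifies it via crictl. A minimal Go sketch of the same line-oriented rewrites follows; the config fragment and function name are made up for illustration and this is not minikube's own code.)

	package main

	import (
		"fmt"
		"regexp"
	)

	// applySedStyleRewrites mirrors the logged `sed -i -r` edits: each one is an
	// anchored, indentation-preserving multi-line regexp replacement.
	func applySedStyleRewrites(cfg string) string {
		cfg = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
			ReplaceAllString(cfg, "${1}SystemdCgroup = true")
		cfg = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
			ReplaceAllString(cfg, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
		cfg = regexp.MustCompile(`(?m)^( *)conf_dir = .*$`).
			ReplaceAllString(cfg, `${1}conf_dir = "/etc/cni/net.d"`)
		return cfg
	}

	func main() {
		// Hypothetical fragment of /etc/containerd/config.toml, not read from the node.
		fragment := `[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.mk"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false
	`
		fmt.Print(applySedStyleRewrites(fragment))
	}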
	I1124 13:50:42.347095  651882 out.go:252] * Restarting existing docker container for "newest-cni-846862" ...
	I1124 13:50:42.347196  651882 cli_runner.go:164] Run: docker start newest-cni-846862
	I1124 13:50:42.679126  651882 cli_runner.go:164] Run: docker container inspect newest-cni-846862 --format={{.State.Status}}
	I1124 13:50:42.699074  651882 kic.go:430] container "newest-cni-846862" state is running.
	I1124 13:50:42.699651  651882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-846862
	I1124 13:50:42.718209  651882 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/config.json ...
	I1124 13:50:42.718521  651882 machine.go:94] provisionDockerMachine start ...
	I1124 13:50:42.718634  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:42.737521  651882 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:42.737828  651882 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I1124 13:50:42.737840  651882 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:50:42.738494  651882 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48808->127.0.0.1:33476: read: connection reset by peer
	I1124 13:50:45.890573  651882 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-846862
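	(The first dial above fails with "connection reset by peer" because sshd inside the freshly restarted container is not accepting connections yet; provisioning simply retries until the `hostname` command succeeds, as the surrounding lines show. Below is a rough sketch of that dial-and-retry pattern with golang.org/x/crypto/ssh; the port, key path and user come from the log, while the retry interval and attempt count are assumptions and this is not minikube's actual sshutil code.)

	package main

	import (
		"fmt"
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; no known_hosts check
			Timeout:         5 * time.Second,
		}
		// Keep dialing while the restarted container's sshd comes up.
		var client *ssh.Client
		for i := 0; i < 30; i++ {
			client, err = ssh.Dial("tcp", "127.0.0.1:33476", cfg)
			if err == nil {
				break
			}
			time.Sleep(time.Second)
		}
		if client == nil {
			log.Fatalf("ssh never became reachable: %v", err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}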
	
	I1124 13:50:45.890609  651882 ubuntu.go:182] provisioning hostname "newest-cni-846862"
	I1124 13:50:45.890679  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:45.910179  651882 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:45.910490  651882 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I1124 13:50:45.910511  651882 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-846862 && echo "newest-cni-846862" | sudo tee /etc/hostname
	I1124 13:50:46.073585  651882 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-846862
	
	I1124 13:50:46.073669  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.093717  651882 main.go:143] libmachine: Using SSH client type: native
	I1124 13:50:46.094049  651882 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I1124 13:50:46.094072  651882 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-846862' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-846862/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-846862' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:50:46.248634  651882 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:50:46.248666  651882 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-370498/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-370498/.minikube}
	I1124 13:50:46.248728  651882 ubuntu.go:190] setting up certificates
	I1124 13:50:46.248768  651882 provision.go:84] configureAuth start
	I1124 13:50:46.248849  651882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-846862
	I1124 13:50:46.270511  651882 provision.go:143] copyHostCerts
	I1124 13:50:46.270566  651882 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem, removing ...
	I1124 13:50:46.270584  651882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem
	I1124 13:50:46.270643  651882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/ca.pem (1082 bytes)
	I1124 13:50:46.270761  651882 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem, removing ...
	I1124 13:50:46.270773  651882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem
	I1124 13:50:46.270802  651882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/cert.pem (1123 bytes)
	I1124 13:50:46.270878  651882 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem, removing ...
	I1124 13:50:46.270890  651882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem
	I1124 13:50:46.270932  651882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-370498/.minikube/key.pem (1675 bytes)
	I1124 13:50:46.271050  651882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem org=jenkins.newest-cni-846862 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-846862]
	I1124 13:50:46.375825  651882 provision.go:177] copyRemoteCerts
	I1124 13:50:46.375885  651882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:50:46.375933  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.397552  651882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa Username:docker}
	I1124 13:50:46.503932  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:50:46.525588  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 13:50:46.547966  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:50:46.570387  651882 provision.go:87] duration metric: took 321.59766ms to configureAuth
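	(configureAuth above generates a server certificate whose SANs are 127.0.0.1, 192.168.85.2, localhost, minikube and newest-cni-846862, then copies ca.pem, server.pem and server-key.pem to /etc/docker on the node. A condensed sketch of producing a certificate with those SANs using Go's crypto/x509 follows; it is self-signed here for brevity, whereas minikube signs with the ca.pem/ca-key.pem CA, and the serial number, validity window and key size are assumptions.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-846862"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provision.go log line above.
			DNSNames:    []string{"localhost", "minikube", "newest-cni-846862"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		}
		// Self-signed for the sketch: the template doubles as the parent certificate.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}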
	I1124 13:50:46.570418  651882 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:50:46.570668  651882 config.go:182] Loaded profile config "newest-cni-846862": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:46.570686  651882 machine.go:97] duration metric: took 3.852145082s to provisionDockerMachine
	I1124 13:50:46.570696  651882 start.go:293] postStartSetup for "newest-cni-846862" (driver="docker")
	I1124 13:50:46.570708  651882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:50:46.570770  651882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:50:46.570818  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.591385  651882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa Username:docker}
	I1124 13:50:46.702143  651882 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:50:46.706362  651882 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:50:46.706395  651882 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:50:46.706409  651882 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/addons for local assets ...
	I1124 13:50:46.706460  651882 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-370498/.minikube/files for local assets ...
	I1124 13:50:46.706546  651882 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem -> 3741222.pem in /etc/ssl/certs
	I1124 13:50:46.706654  651882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:50:46.715261  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:50:46.735125  651882 start.go:296] duration metric: took 164.410202ms for postStartSetup
	I1124 13:50:46.735232  651882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:50:46.735275  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.755027  651882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa Username:docker}
	I1124 13:50:46.858979  651882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:50:46.864572  651882 fix.go:56] duration metric: took 4.537829174s for fixHost
	I1124 13:50:46.864599  651882 start.go:83] releasing machines lock for "newest-cni-846862", held for 4.537885231s
	I1124 13:50:46.864673  651882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-846862
	I1124 13:50:46.884027  651882 ssh_runner.go:195] Run: cat /version.json
	I1124 13:50:46.884089  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.884121  651882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:50:46.884214  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:46.903156  651882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa Username:docker}
	I1124 13:50:46.904889  651882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/newest-cni-846862/id_rsa Username:docker}
	I1124 13:50:47.065543  651882 ssh_runner.go:195] Run: systemctl --version
	I1124 13:50:47.072862  651882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:50:47.078127  651882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:50:47.078202  651882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:50:47.087519  651882 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 13:50:47.087548  651882 start.go:496] detecting cgroup driver to use...
	I1124 13:50:47.087586  651882 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:50:47.087641  651882 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:50:47.106491  651882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:50:47.121750  651882 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:50:47.121822  651882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:50:46.451253  648989 cli_runner.go:164] Run: docker network inspect auto-355661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:50:46.470209  648989 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 13:50:46.474806  648989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
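	(The two commands above first grep /etc/hosts for an existing host.minikube.internal mapping and, since none matches, rewrite the file by dropping any stale line for that name and appending "192.168.76.1<tab>host.minikube.internal". A small Go sketch of the same idempotent rewrite on an in-memory copy of the file follows; the helper name and sample contents are made up for illustration.)

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostsEntry mirrors the logged shell pipeline: drop any line that
	// already ends with "<tab>name", then append "ip<tab>name".
	func upsertHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		// Sample contents only; the real file is rewritten in place via `sudo cp`.
		hosts := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
		fmt.Print(upsertHostsEntry(hosts, "192.168.76.1", "host.minikube.internal"))
	}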
	I1124 13:50:46.486132  648989 kubeadm.go:884] updating cluster {Name:auto-355661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-355661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:50:46.486269  648989 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:50:46.486339  648989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:50:46.516304  648989 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:50:46.516326  648989 containerd.go:534] Images already preloaded, skipping extraction
	I1124 13:50:46.516391  648989 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:50:46.545597  648989 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:50:46.545624  648989 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:50:46.545633  648989 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 13:50:46.545752  648989 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-355661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-355661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:50:46.545820  648989 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:50:46.575441  648989 cni.go:84] Creating CNI manager for ""
	I1124 13:50:46.575469  648989 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:50:46.575490  648989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:50:46.575520  648989 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-355661 NodeName:auto-355661 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:50:46.575721  648989 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-355661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:50:46.575815  648989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:50:46.585434  648989 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:50:46.585527  648989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:50:46.595489  648989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1124 13:50:46.612254  648989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:50:46.630217  648989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1124 13:50:46.644676  648989 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:50:46.649232  648989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:50:46.660397  648989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:50:46.753664  648989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:50:46.781806  648989 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661 for IP: 192.168.76.2
	I1124 13:50:46.781834  648989 certs.go:195] generating shared ca certs ...
	I1124 13:50:46.781862  648989 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.782080  648989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:50:46.782136  648989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:50:46.782147  648989 certs.go:257] generating profile certs ...
	I1124 13:50:46.782214  648989 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.key
	I1124 13:50:46.782233  648989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.crt with IP's: []
	I1124 13:50:46.864531  648989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.crt ...
	I1124 13:50:46.864566  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.crt: {Name:mkd3e24059f35a20f49945b99b9b69f5ef4934a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.864789  648989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.key ...
	I1124 13:50:46.864809  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/client.key: {Name:mkf5018446504d7dd904b2d9011f49a8094fea41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.864955  648989 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key.94ea11ee
	I1124 13:50:46.864979  648989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt.94ea11ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 13:50:46.884694  648989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt.94ea11ee ...
	I1124 13:50:46.884728  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt.94ea11ee: {Name:mk16141a12725a6fcc597223a338945a69dfc26f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.884936  648989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key.94ea11ee ...
	I1124 13:50:46.884955  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key.94ea11ee: {Name:mkace1ece8ca39df04e578660ab45b136cea4475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.885067  648989 certs.go:382] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt.94ea11ee -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt
	I1124 13:50:46.885185  648989 certs.go:386] copying /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key.94ea11ee -> /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key
	I1124 13:50:46.885273  648989 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.key
	I1124 13:50:46.885294  648989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.crt with IP's: []
	I1124 13:50:46.930841  648989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.crt ...
	I1124 13:50:46.930869  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.crt: {Name:mkb43906cc9d74e972dbbd959e1e8bed60339465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.931108  648989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.key ...
	I1124 13:50:46.931134  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.key: {Name:mk10f369f3ab4d5388d6a45f0e5b41235040ba94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:46.931410  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:50:46.931456  648989 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:50:46.931466  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:50:46.931492  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:50:46.931517  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:50:46.931541  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:50:46.931581  648989 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:50:46.932310  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:50:46.954721  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:50:46.974887  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:50:46.994308  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:50:47.015895  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 13:50:47.037240  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:50:47.058093  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:50:47.079810  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/auto-355661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:50:47.099802  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:50:47.124855  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:50:47.144975  648989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:50:47.168266  648989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
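	(The scp lines above push the shared CA, the freshly generated profile certificates and the kubeconfig onto the node over the existing SSH connection; conceptually each transfer is "write these bytes to a root-owned path". Below is a rough sketch of one such transfer with golang.org/x/crypto/ssh, piping the file into `sudo tee`; this is not minikube's actual ssh_runner implementation, and the passwordless-sudo remote user is an assumption carried over from the logged commands.)

	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// pushFile copies data to a root-owned destination by piping it into
	// `sudo tee` on the remote side, roughly what the scp log lines amount to.
	func pushFile(client *ssh.Client, data []byte, dst string) error {
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		session.Stdin = bytes.NewReader(data)
		return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
	}

	func main() {
		// Client setup mirrors the earlier SSH sketch (key path and port from the log).
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33471", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only
		})
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		ca, err := os.ReadFile("/home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		if err := pushFile(client, ca, "/var/lib/minikube/certs/ca.crt"); err != nil {
			log.Fatal(err)
		}
	}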
	I1124 13:50:47.186289  648989 ssh_runner.go:195] Run: openssl version
	I1124 13:50:47.196342  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:50:47.206522  648989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:50:47.211167  648989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:50:47.211236  648989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:50:47.249619  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
	I1124 13:50:47.259905  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:50:47.273689  648989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:50:47.279800  648989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:50:47.279872  648989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:50:47.316446  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:50:47.326770  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:50:47.336300  648989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:50:47.341033  648989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:50:47.341102  648989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:50:47.383825  648989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:50:47.398335  648989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:50:47.403148  648989 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:50:47.403203  648989 kubeadm.go:401] StartCluster: {Name:auto-355661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-355661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:50:47.403296  648989 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:50:47.403357  648989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:50:47.436111  648989 cri.go:89] found id: ""
	I1124 13:50:47.436210  648989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:50:47.446133  648989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:50:47.455407  648989 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:50:47.455486  648989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:50:47.464355  648989 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:50:47.464377  648989 kubeadm.go:158] found existing configuration files:
	
	I1124 13:50:47.464440  648989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:50:47.473268  648989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:50:47.473348  648989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:50:47.485942  648989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:50:47.496775  648989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:50:47.496837  648989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:50:47.506132  648989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:50:47.515713  648989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:50:47.515793  648989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:50:47.524588  648989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:50:47.533827  648989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:50:47.534026  648989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:50:47.543012  648989 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:50:47.589659  648989 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:50:47.589709  648989 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:50:47.624899  648989 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:50:47.625073  648989 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:50:47.625132  648989 kubeadm.go:319] OS: Linux
	I1124 13:50:47.625196  648989 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:50:47.625265  648989 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:50:47.625332  648989 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:50:47.625405  648989 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:50:47.625502  648989 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:50:47.625586  648989 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:50:47.625648  648989 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:50:47.625709  648989 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:50:47.696420  648989 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:50:47.696568  648989 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:50:47.696736  648989 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:50:47.703161  648989 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:50:47.138526  651882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:50:47.153295  651882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:50:47.242319  651882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:50:47.324562  651882 docker.go:234] disabling docker service ...
	I1124 13:50:47.324637  651882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:50:47.341049  651882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:50:47.356938  651882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:50:47.452582  651882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:50:47.544177  651882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:50:47.560293  651882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:50:47.577441  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 13:50:47.588652  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:50:47.600054  651882 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1124 13:50:47.600132  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1124 13:50:47.611886  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:50:47.623205  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:50:47.636094  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:50:47.647489  651882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:50:47.658581  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:50:47.669878  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:50:47.680767  651882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:50:47.691087  651882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:50:47.701276  651882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:50:47.711355  651882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:50:47.796448  651882 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 13:50:47.920418  651882 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:50:47.920490  651882 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:50:47.925233  651882 start.go:564] Will wait 60s for crictl version
	I1124 13:50:47.925295  651882 ssh_runner.go:195] Run: which crictl
	I1124 13:50:47.929614  651882 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:50:47.959230  651882 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:50:47.959305  651882 ssh_runner.go:195] Run: containerd --version
	I1124 13:50:47.983458  651882 ssh_runner.go:195] Run: containerd --version
	I1124 13:50:48.010800  651882 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 13:50:48.012433  651882 cli_runner.go:164] Run: docker network inspect newest-cni-846862 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:50:48.030515  651882 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 13:50:48.035097  651882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:50:48.048936  651882 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 13:50:47.706921  648989 out.go:252]   - Generating certificates and keys ...
	I1124 13:50:47.707051  648989 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:50:47.707181  648989 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:50:47.916939  648989 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:50:48.158129  648989 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:50:48.665212  648989 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:50:48.812585  648989 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:50:49.000741  648989 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:50:49.001031  648989 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-355661 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 13:50:48.050188  651882 kubeadm.go:884] updating cluster {Name:newest-cni-846862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-846862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:50:48.050858  651882 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:50:48.051048  651882 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:50:48.082473  651882 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:50:48.082501  651882 containerd.go:534] Images already preloaded, skipping extraction
	I1124 13:50:48.082571  651882 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:50:48.111345  651882 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:50:48.111374  651882 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:50:48.111384  651882 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1124 13:50:48.111505  651882 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-846862 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-846862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:50:48.111603  651882 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:50:48.140114  651882 cni.go:84] Creating CNI manager for ""
	I1124 13:50:48.140142  651882 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
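The cni.go lines above show minikube choosing a CNI based on the driver/runtime pair: the docker driver with the containerd runtime gets kindnet. A minimal sketch of that kind of decision, assuming a simplified rule set rather than minikube's actual selection table:

package main

import "fmt"

// recommendCNI is a hypothetical reduction of the driver/runtime -> CNI choice
// seen in the log: the docker driver with a non-docker runtime gets kindnet.
func recommendCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge" // assumption: fall back to a default bridge CNI otherwise
}

func main() {
	fmt.Println(recommendCNI("docker", "containerd")) // kindnet
}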
	I1124 13:50:48.140158  651882 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 13:50:48.140203  651882 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-846862 NodeName:newest-cni-846862 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:50:48.140354  651882 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-846862"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
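The kubeadm/kubelet/kube-proxy YAML above is rendered from the option set logged at kubeadm.go:190 (advertise address, pod and service CIDRs, Kubernetes version, and so on). A minimal sketch of rendering such a config with text/template, using made-up field names rather than minikube's real template:

package main

import (
	"os"
	"text/template"
)

// Params holds only the values this sketch substitutes; the real config
// carries many more fields.
type Params struct {
	AdvertiseAddress string
	BindPort         int
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := Params{"192.168.85.2", 8443, "10.42.0.0/16", "10.96.0.0/12", "v1.34.1"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}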
	
	I1124 13:50:48.140429  651882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:50:48.149304  651882 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:50:48.149369  651882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:50:48.157855  651882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1124 13:50:48.172840  651882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:50:48.187630  651882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1124 13:50:48.202924  651882 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:50:48.207275  651882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
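The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: drop any existing line for that host, append the fresh entry, then copy the rewritten file back over /etc/hosts. A rough Go equivalent of the same pattern, as a sketch against a throwaway file (this is not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps host to ip.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this hostname (grep -v equivalent).
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Demo against a throwaway copy; updating the real /etc/hosts needs root.
	_ = os.WriteFile("hosts.demo", []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := ensureHostsEntry("hosts.demo", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}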
	I1124 13:50:48.218831  651882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:50:48.303602  651882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:50:48.331115  651882 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862 for IP: 192.168.85.2
	I1124 13:50:48.331147  651882 certs.go:195] generating shared ca certs ...
	I1124 13:50:48.331171  651882 certs.go:227] acquiring lock for ca certs: {Name:mk5874497fda855b1e2ff816147ffdfbc44946ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:48.331343  651882 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key
	I1124 13:50:48.331385  651882 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key
	I1124 13:50:48.331395  651882 certs.go:257] generating profile certs ...
	I1124 13:50:48.331480  651882 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/client.key
	I1124 13:50:48.331537  651882 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/apiserver.key.f3638c55
	I1124 13:50:48.331571  651882 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/proxy-client.key
	I1124 13:50:48.331676  651882 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem (1338 bytes)
	W1124 13:50:48.331707  651882 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122_empty.pem, impossibly tiny 0 bytes
	I1124 13:50:48.331717  651882 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:50:48.331745  651882 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:50:48.331770  651882 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:50:48.331796  651882 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/certs/key.pem (1675 bytes)
	I1124 13:50:48.331837  651882 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem (1708 bytes)
	I1124 13:50:48.332603  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:50:48.355199  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:50:48.376853  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:50:48.400999  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:50:48.430437  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 13:50:48.460628  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:50:48.485767  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:50:48.508647  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/newest-cni-846862/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:50:48.533328  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/ssl/certs/3741222.pem --> /usr/share/ca-certificates/3741222.pem (1708 bytes)
	I1124 13:50:48.560453  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:50:48.586139  651882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-370498/.minikube/certs/374122.pem --> /usr/share/ca-certificates/374122.pem (1338 bytes)
	I1124 13:50:48.608671  651882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:50:48.624705  651882 ssh_runner.go:195] Run: openssl version
	I1124 13:50:48.634364  651882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/374122.pem && ln -fs /usr/share/ca-certificates/374122.pem /etc/ssl/certs/374122.pem"
	I1124 13:50:48.646770  651882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/374122.pem
	I1124 13:50:48.652663  651882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:20 /usr/share/ca-certificates/374122.pem
	I1124 13:50:48.652903  651882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/374122.pem
	I1124 13:50:48.701672  651882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/374122.pem /etc/ssl/certs/51391683.0"
	I1124 13:50:48.715411  651882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3741222.pem && ln -fs /usr/share/ca-certificates/3741222.pem /etc/ssl/certs/3741222.pem"
	I1124 13:50:48.727197  651882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3741222.pem
	I1124 13:50:48.732022  651882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:20 /usr/share/ca-certificates/3741222.pem
	I1124 13:50:48.732095  651882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3741222.pem
	I1124 13:50:48.773025  651882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3741222.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:50:48.782409  651882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:50:48.792503  651882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:50:48.798728  651882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:50:48.798830  651882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:50:48.838088  651882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
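Each openssl/ln pair above copies a PEM into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), which is how OpenSSL looks up trust anchors. A small sketch of the same two steps driven from Go, shelling out to openssl; the paths are examples, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates <certsDir>/<hash>.0 pointing at certPath,
// mirroring the `openssl x509 -hash` + `ln -fs` pair in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Example paths only; writing into /etc/ssl/certs requires root.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}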
	I1124 13:50:48.848340  651882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:50:48.852989  651882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 13:50:48.912455  651882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 13:50:48.985127  651882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 13:50:49.079392  651882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 13:50:49.155701  651882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 13:50:49.234432  651882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
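Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether to regenerate certs. The same check with crypto/x509, as a sketch (the file path is just an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// before now+window, i.e. what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}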
	I1124 13:50:49.334489  651882 kubeadm.go:401] StartCluster: {Name:newest-cni-846862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-846862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:50:49.334613  651882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:50:49.334701  651882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:50:49.414948  651882 cri.go:89] found id: "ae5ee49018aa6033ba48991dad406d9b5c0deff8967430f442707be62cc24113"
	I1124 13:50:49.414977  651882 cri.go:89] found id: "08725a07d28d819186f073f476f1f074a9d1be60d2aaa4e76ac8c0a1a745e5c1"
	I1124 13:50:49.414984  651882 cri.go:89] found id: "ad36afcb219be039f5e7330d020ac3ebbd9611dfe42e2558a66b227d7b652ee6"
	I1124 13:50:49.414988  651882 cri.go:89] found id: "12dc8da2d3d39b524d872c9e91350db979d1e9ff9977b5d71b22c2d5d732ed02"
	I1124 13:50:49.414993  651882 cri.go:89] found id: "0438c6289f014c867a2c3859c6af483dd174332dd3424c10931803ebf51f4079"
	I1124 13:50:49.414997  651882 cri.go:89] found id: "55cde7af45d40e8d674bec9f80210fd5823f82d4d8c33474c5dacf7ba2001b0e"
	I1124 13:50:49.415002  651882 cri.go:89] found id: "01146adee0ab24a6fb08a143c05b008bed5e6145ff59663cae9815ba2fc29b75"
	I1124 13:50:49.415006  651882 cri.go:89] found id: "c85477776bb9f4ec299d558bcfcada5d0adfd35fce0f0773b40887e3d8f84ecf"
	I1124 13:50:49.415010  651882 cri.go:89] found id: "8e4403c4035ade85a52e8cd2ed2ce64f6987067291f5860adc54eb7a84e60487"
	I1124 13:50:49.415019  651882 cri.go:89] found id: "6a650d9b87f78a0f5adbddd47846034b53a8ad49044ccd48ef44dc8ffaaa7607"
	I1124 13:50:49.415025  651882 cri.go:89] found id: ""
	I1124 13:50:49.415077  651882 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1124 13:50:49.447704  651882 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"00c573bcf2d54c254e24e7dfa9deeb4a3cf7be64da83e28f21a9da85f5cbc65d","pid":863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00c573bcf2d54c254e24e7dfa9deeb4a3cf7be64da83e28f21a9da85f5cbc65d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00c573bcf2d54c254e24e7dfa9deeb4a3cf7be64da83e28f21a9da85f5cbc65d/rootfs","created":"2025-11-24T13:50:49.008592525Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"00c573bcf2d54c254e24e7dfa9deeb4a3cf7be64da83e28f21a9da85f5cbc65d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-846862_a88d472357f8b1a666cc709241ae6394","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-846862","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a88d472357f8b1a666cc709241ae6394"},"owner":"root"},{"ociVersion":"1.2.1","id":"08725a07d28d819186f073f476f1f074a9d1be60d2aaa4e76ac8c0a1a745e5c1","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08725a07d28d819186f073f476f1f074a9d1be60d2aaa4e76ac8c0a1a745e5c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08725a07d28d819186f073f476f1f074a9d1be60d2aaa4e76ac8c0a1a745e5c1/rootfs","created":"2025-11-24T13:50:49.323048029Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"00c573bcf2d54c254e24e7dfa9deeb4a3cf7be64da83e28f21a9da85f5cbc65d","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-846862","io.kubernetes.cri.sandbox-nam
espace":"kube-system","io.kubernetes.cri.sandbox-uid":"a88d472357f8b1a666cc709241ae6394"},"owner":"root"},{"ociVersion":"1.2.1","id":"12dc8da2d3d39b524d872c9e91350db979d1e9ff9977b5d71b22c2d5d732ed02","pid":917,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12dc8da2d3d39b524d872c9e91350db979d1e9ff9977b5d71b22c2d5d732ed02","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12dc8da2d3d39b524d872c9e91350db979d1e9ff9977b5d71b22c2d5d732ed02/rootfs","created":"2025-11-24T13:50:49.13281136Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"534e9e0756b35791f696e82980081e37f1dfe4059c22961b3d1c21d418cf6fd5","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-846862","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f23e44d567420666e31573502b5c1d8d"},"owner":"root"},{"ociVersion":"1.2.1","id":"53
4e9e0756b35791f696e82980081e37f1dfe4059c22961b3d1c21d418cf6fd5","pid":775,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/534e9e0756b35791f696e82980081e37f1dfe4059c22961b3d1c21d418cf6fd5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/534e9e0756b35791f696e82980081e37f1dfe4059c22961b3d1c21d418cf6fd5/rootfs","created":"2025-11-24T13:50:48.945010593Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"534e9e0756b35791f696e82980081e37f1dfe4059c22961b3d1c21d418cf6fd5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-846862_f23e44d567420666e31573502b5c1d8d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-846862","io.kubernetes.cri.sandbox-namespa
ce":"kube-system","io.kubernetes.cri.sandbox-uid":"f23e44d567420666e31573502b5c1d8d"},"owner":"root"},{"ociVersion":"1.2.1","id":"ad36afcb219be039f5e7330d020ac3ebbd9611dfe42e2558a66b227d7b652ee6","pid":954,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad36afcb219be039f5e7330d020ac3ebbd9611dfe42e2558a66b227d7b652ee6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad36afcb219be039f5e7330d020ac3ebbd9611dfe42e2558a66b227d7b652ee6/rootfs","created":"2025-11-24T13:50:49.243810574Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"e48f218d08352c1db43874fec99e833d328e414bf1903fd11c000f6b0272b170","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-846862","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9dcf058e6212ad1ea6d57bf551d
87aa4"},"owner":"root"},{"ociVersion":"1.2.1","id":"ae5ee49018aa6033ba48991dad406d9b5c0deff8967430f442707be62cc24113","pid":974,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae5ee49018aa6033ba48991dad406d9b5c0deff8967430f442707be62cc24113","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae5ee49018aa6033ba48991dad406d9b5c0deff8967430f442707be62cc24113/rootfs","created":"2025-11-24T13:50:49.311206502Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"ba8898476255feed6f7f2d353c42d033dca350c5b3ae0d20155abeb786ec6fa0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-846862","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ea43494afebda609b4590e6cd6dac8c"},"owner":"root"},{"ociVersion":"1.2.1","id":"ba8898476255feed6f7f2d353c42d033dca350c5b3ae0d20155ab
eb786ec6fa0","pid":881,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba8898476255feed6f7f2d353c42d033dca350c5b3ae0d20155abeb786ec6fa0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba8898476255feed6f7f2d353c42d033dca350c5b3ae0d20155abeb786ec6fa0/rootfs","created":"2025-11-24T13:50:49.040022269Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"ba8898476255feed6f7f2d353c42d033dca350c5b3ae0d20155abeb786ec6fa0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-846862_9ea43494afebda609b4590e6cd6dac8c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-846862","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernete
s.cri.sandbox-uid":"9ea43494afebda609b4590e6cd6dac8c"},"owner":"root"},{"ociVersion":"1.2.1","id":"e48f218d08352c1db43874fec99e833d328e414bf1903fd11c000f6b0272b170","pid":846,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e48f218d08352c1db43874fec99e833d328e414bf1903fd11c000f6b0272b170","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e48f218d08352c1db43874fec99e833d328e414bf1903fd11c000f6b0272b170/rootfs","created":"2025-11-24T13:50:48.992313705Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e48f218d08352c1db43874fec99e833d328e414bf1903fd11c000f6b0272b170","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-846862_9dcf058e6212ad1ea6d57bf551d87aa4","io.kuberne
tes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-846862","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9dcf058e6212ad1ea6d57bf551d87aa4"},"owner":"root"}]
	I1124 13:50:49.447901  651882 cri.go:126] list returned 8 containers
	I1124 13:50:49.447929  651882 cri.go:129] container: {ID:00c573bcf2d54c254e24e7dfa9deeb4a3cf7be64da83e28f21a9da85f5cbc65d Status:running}
	I1124 13:50:49.447958  651882 cri.go:131] skipping 00c573bcf2d54c254e24e7dfa9deeb4a3cf7be64da83e28f21a9da85f5cbc65d - not in ps
	I1124 13:50:49.447969  651882 cri.go:129] container: {ID:08725a07d28d819186f073f476f1f074a9d1be60d2aaa4e76ac8c0a1a745e5c1 Status:running}
	I1124 13:50:49.447987  651882 cri.go:135] skipping {08725a07d28d819186f073f476f1f074a9d1be60d2aaa4e76ac8c0a1a745e5c1 running}: state = "running", want "paused"
	I1124 13:50:49.448011  651882 cri.go:129] container: {ID:12dc8da2d3d39b524d872c9e91350db979d1e9ff9977b5d71b22c2d5d732ed02 Status:running}
	I1124 13:50:49.448018  651882 cri.go:135] skipping {12dc8da2d3d39b524d872c9e91350db979d1e9ff9977b5d71b22c2d5d732ed02 running}: state = "running", want "paused"
	I1124 13:50:49.448026  651882 cri.go:129] container: {ID:534e9e0756b35791f696e82980081e37f1dfe4059c22961b3d1c21d418cf6fd5 Status:running}
	I1124 13:50:49.448034  651882 cri.go:131] skipping 534e9e0756b35791f696e82980081e37f1dfe4059c22961b3d1c21d418cf6fd5 - not in ps
	I1124 13:50:49.448040  651882 cri.go:129] container: {ID:ad36afcb219be039f5e7330d020ac3ebbd9611dfe42e2558a66b227d7b652ee6 Status:running}
	I1124 13:50:49.448052  651882 cri.go:135] skipping {ad36afcb219be039f5e7330d020ac3ebbd9611dfe42e2558a66b227d7b652ee6 running}: state = "running", want "paused"
	I1124 13:50:49.448062  651882 cri.go:129] container: {ID:ae5ee49018aa6033ba48991dad406d9b5c0deff8967430f442707be62cc24113 Status:running}
	I1124 13:50:49.448070  651882 cri.go:135] skipping {ae5ee49018aa6033ba48991dad406d9b5c0deff8967430f442707be62cc24113 running}: state = "running", want "paused"
	I1124 13:50:49.448082  651882 cri.go:129] container: {ID:ba8898476255feed6f7f2d353c42d033dca350c5b3ae0d20155abeb786ec6fa0 Status:running}
	I1124 13:50:49.448095  651882 cri.go:131] skipping ba8898476255feed6f7f2d353c42d033dca350c5b3ae0d20155abeb786ec6fa0 - not in ps
	I1124 13:50:49.448100  651882 cri.go:129] container: {ID:e48f218d08352c1db43874fec99e833d328e414bf1903fd11c000f6b0272b170 Status:running}
	I1124 13:50:49.448105  651882 cri.go:131] skipping e48f218d08352c1db43874fec99e833d328e414bf1903fd11c000f6b0272b170 - not in ps
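cri.go above lists runc state as JSON, then keeps only the IDs that also appeared in the earlier crictl output and whose status matches the wanted state (here "paused", so every running container and every sandbox "not in ps" is skipped). A compact sketch of that filter, assuming only the JSON fields visible in the log:

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer mirrors the fields of `runc list -f json` this filter needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterContainers keeps IDs that crictl reported and whose status matches want.
func filterContainers(runcJSON []byte, crictlIDs map[string]bool, want string) ([]string, error) {
	var all []runcContainer
	if err := json.Unmarshal(runcJSON, &all); err != nil {
		return nil, err
	}
	var keep []string
	for _, c := range all {
		if !crictlIDs[c.ID] {
			continue // "skipping ... - not in ps"
		}
		if c.Status != want {
			continue // e.g. state = "running", want "paused"
		}
		keep = append(keep, c.ID)
	}
	return keep, nil
}

func main() {
	sample := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
	ids, _ := filterContainers(sample, map[string]bool{"abc": true, "def": true}, "paused")
	fmt.Println(ids) // [def]
}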
	I1124 13:50:49.448166  651882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:50:49.460427  651882 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 13:50:49.460451  651882 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 13:50:49.460507  651882 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 13:50:49.474298  651882 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:50:49.475808  651882 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-846862" does not appear in /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:50:49.477042  651882 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-370498/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-846862" cluster setting kubeconfig missing "newest-cni-846862" context setting]
	I1124 13:50:49.478663  651882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:49.481555  651882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 13:50:49.493038  651882 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 13:50:49.493075  651882 kubeadm.go:602] duration metric: took 32.617157ms to restartPrimaryControlPlane
	I1124 13:50:49.493087  651882 kubeadm.go:403] duration metric: took 158.609331ms to StartCluster
	I1124 13:50:49.493106  651882 settings.go:142] acquiring lock: {Name:mka599a3c9bae62ffb84d261186583052ce40f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:49.493179  651882 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:50:49.495334  651882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:49.495635  651882 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:50:49.495847  651882 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:50:49.495991  651882 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-846862"
	I1124 13:50:49.496017  651882 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-846862"
	W1124 13:50:49.496030  651882 addons.go:248] addon storage-provisioner should already be in state true
	I1124 13:50:49.496071  651882 host.go:66] Checking if "newest-cni-846862" exists ...
	I1124 13:50:49.496074  651882 config.go:182] Loaded profile config "newest-cni-846862": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:49.496135  651882 addons.go:70] Setting default-storageclass=true in profile "newest-cni-846862"
	I1124 13:50:49.496156  651882 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-846862"
	I1124 13:50:49.496554  651882 cli_runner.go:164] Run: docker container inspect newest-cni-846862 --format={{.State.Status}}
	I1124 13:50:49.496621  651882 cli_runner.go:164] Run: docker container inspect newest-cni-846862 --format={{.State.Status}}
	I1124 13:50:49.496784  651882 addons.go:70] Setting dashboard=true in profile "newest-cni-846862"
	I1124 13:50:49.496799  651882 addons.go:70] Setting metrics-server=true in profile "newest-cni-846862"
	I1124 13:50:49.496809  651882 addons.go:239] Setting addon dashboard=true in "newest-cni-846862"
	I1124 13:50:49.496818  651882 addons.go:239] Setting addon metrics-server=true in "newest-cni-846862"
	W1124 13:50:49.496819  651882 addons.go:248] addon dashboard should already be in state true
	W1124 13:50:49.496827  651882 addons.go:248] addon metrics-server should already be in state true
	I1124 13:50:49.496853  651882 host.go:66] Checking if "newest-cni-846862" exists ...
	I1124 13:50:49.496862  651882 host.go:66] Checking if "newest-cni-846862" exists ...
	I1124 13:50:49.497353  651882 cli_runner.go:164] Run: docker container inspect newest-cni-846862 --format={{.State.Status}}
	I1124 13:50:49.497411  651882 cli_runner.go:164] Run: docker container inspect newest-cni-846862 --format={{.State.Status}}
	I1124 13:50:49.499198  651882 out.go:179] * Verifying Kubernetes components...
	I1124 13:50:49.503846  651882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:50:49.530554  651882 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 13:50:49.530629  651882 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 13:50:49.532176  651882 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 13:50:49.532199  651882 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 13:50:49.532271  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:49.533681  651882 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 13:50:49.534859  651882 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 13:50:49.534876  651882 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 13:50:49.534979  651882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-846862
	I1124 13:50:49.539796  651882 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 13:50:47.507436  639073 node_ready.go:57] node "default-k8s-diff-port-403602" has "Ready":"False" status (will retry)
	I1124 13:50:49.009782  639073 node_ready.go:49] node "default-k8s-diff-port-403602" is "Ready"
	I1124 13:50:49.009821  639073 node_ready.go:38] duration metric: took 12.006351303s for node "default-k8s-diff-port-403602" to be "Ready" ...
	I1124 13:50:49.009838  639073 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:50:49.009896  639073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:50:49.031905  639073 api_server.go:72] duration metric: took 12.583482407s to wait for apiserver process to appear ...
	I1124 13:50:49.031953  639073 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:50:49.031977  639073 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1124 13:50:49.051734  639073 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1124 13:50:49.052978  639073 api_server.go:141] control plane version: v1.34.1
	I1124 13:50:49.053014  639073 api_server.go:131] duration metric: took 21.052688ms to wait for apiserver health ...
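api_server.go above waits for the apiserver to answer 200/ok on https://192.168.103.2:8444/healthz before moving on to the pod checks. A sketch of that wait loop, skipping TLS verification for brevity (a real client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Insecure only for the sketch; use the cluster CA in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8444/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}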
	I1124 13:50:49.053027  639073 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:50:49.058703  639073 system_pods.go:59] 8 kube-system pods found
	I1124 13:50:49.058763  639073 system_pods.go:61] "coredns-66bc5c9577-hrj7f" [f86f95a0-9e92-429a-9dd7-76843d8d6af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:50:49.058776  639073 system_pods.go:61] "etcd-default-k8s-diff-port-403602" [62d9cce7-ae7f-4ca8-8821-bcc444aef365] Running
	I1124 13:50:49.058790  639073 system_pods.go:61] "kindnet-hdcbn" [88d22920-c2fd-4bdf-95ec-c2f4f5c22669] Running
	I1124 13:50:49.058803  639073 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-403602" [935e74ec-fe4d-4a3a-b83d-ff0bf904f0d3] Running
	I1124 13:50:49.058809  639073 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-403602" [67672d0f-6473-46c9-9376-665f7eea8cff] Running
	I1124 13:50:49.058814  639073 system_pods.go:61] "kube-proxy-fhwvd" [a8814197-f505-433e-a55d-b0106f40e505] Running
	I1124 13:50:49.058819  639073 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-403602" [febe5d3d-ddbd-4960-ae25-0db79a14c200] Running
	I1124 13:50:49.058825  639073 system_pods.go:61] "storage-provisioner" [649238f9-bcbc-4569-bff7-9488834e21c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:50:49.058837  639073 system_pods.go:74] duration metric: took 5.80179ms to wait for pod list to return data ...
	I1124 13:50:49.058846  639073 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:50:49.062993  639073 default_sa.go:45] found service account: "default"
	I1124 13:50:49.063123  639073 default_sa.go:55] duration metric: took 4.265225ms for default service account to be created ...
	I1124 13:50:49.063232  639073 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:50:49.069553  639073 system_pods.go:86] 8 kube-system pods found
	I1124 13:50:49.069591  639073 system_pods.go:89] "coredns-66bc5c9577-hrj7f" [f86f95a0-9e92-429a-9dd7-76843d8d6af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:50:49.069599  639073 system_pods.go:89] "etcd-default-k8s-diff-port-403602" [62d9cce7-ae7f-4ca8-8821-bcc444aef365] Running
	I1124 13:50:49.069614  639073 system_pods.go:89] "kindnet-hdcbn" [88d22920-c2fd-4bdf-95ec-c2f4f5c22669] Running
	I1124 13:50:49.069620  639073 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403602" [935e74ec-fe4d-4a3a-b83d-ff0bf904f0d3] Running
	I1124 13:50:49.069625  639073 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403602" [67672d0f-6473-46c9-9376-665f7eea8cff] Running
	I1124 13:50:49.069630  639073 system_pods.go:89] "kube-proxy-fhwvd" [a8814197-f505-433e-a55d-b0106f40e505] Running
	I1124 13:50:49.069636  639073 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403602" [febe5d3d-ddbd-4960-ae25-0db79a14c200] Running
	I1124 13:50:49.069642  639073 system_pods.go:89] "storage-provisioner" [649238f9-bcbc-4569-bff7-9488834e21c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:50:49.069671  639073 retry.go:31] will retry after 244.846537ms: missing components: kube-dns
	I1124 13:50:49.323541  639073 system_pods.go:86] 8 kube-system pods found
	I1124 13:50:49.323594  639073 system_pods.go:89] "coredns-66bc5c9577-hrj7f" [f86f95a0-9e92-429a-9dd7-76843d8d6af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:50:49.323606  639073 system_pods.go:89] "etcd-default-k8s-diff-port-403602" [62d9cce7-ae7f-4ca8-8821-bcc444aef365] Running
	I1124 13:50:49.323617  639073 system_pods.go:89] "kindnet-hdcbn" [88d22920-c2fd-4bdf-95ec-c2f4f5c22669] Running
	I1124 13:50:49.323625  639073 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403602" [935e74ec-fe4d-4a3a-b83d-ff0bf904f0d3] Running
	I1124 13:50:49.323633  639073 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403602" [67672d0f-6473-46c9-9376-665f7eea8cff] Running
	I1124 13:50:49.325828  639073 system_pods.go:89] "kube-proxy-fhwvd" [a8814197-f505-433e-a55d-b0106f40e505] Running
	I1124 13:50:49.325938  639073 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403602" [febe5d3d-ddbd-4960-ae25-0db79a14c200] Running
	I1124 13:50:49.325987  639073 system_pods.go:89] "storage-provisioner" [649238f9-bcbc-4569-bff7-9488834e21c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:50:49.326017  639073 retry.go:31] will retry after 364.207486ms: missing components: kube-dns
	I1124 13:50:49.702701  639073 system_pods.go:86] 8 kube-system pods found
	I1124 13:50:49.702781  639073 system_pods.go:89] "coredns-66bc5c9577-hrj7f" [f86f95a0-9e92-429a-9dd7-76843d8d6af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:50:49.702800  639073 system_pods.go:89] "etcd-default-k8s-diff-port-403602" [62d9cce7-ae7f-4ca8-8821-bcc444aef365] Running
	I1124 13:50:49.702810  639073 system_pods.go:89] "kindnet-hdcbn" [88d22920-c2fd-4bdf-95ec-c2f4f5c22669] Running
	I1124 13:50:49.702817  639073 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403602" [935e74ec-fe4d-4a3a-b83d-ff0bf904f0d3] Running
	I1124 13:50:49.702830  639073 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403602" [67672d0f-6473-46c9-9376-665f7eea8cff] Running
	I1124 13:50:49.702837  639073 system_pods.go:89] "kube-proxy-fhwvd" [a8814197-f505-433e-a55d-b0106f40e505] Running
	I1124 13:50:49.702844  639073 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403602" [febe5d3d-ddbd-4960-ae25-0db79a14c200] Running
	I1124 13:50:49.702854  639073 system_pods.go:89] "storage-provisioner" [649238f9-bcbc-4569-bff7-9488834e21c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:50:49.703162  639073 retry.go:31] will retry after 406.44142ms: missing components: kube-dns
	I1124 13:50:50.114179  639073 system_pods.go:86] 8 kube-system pods found
	I1124 13:50:50.114223  639073 system_pods.go:89] "coredns-66bc5c9577-hrj7f" [f86f95a0-9e92-429a-9dd7-76843d8d6af1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:50:50.114233  639073 system_pods.go:89] "etcd-default-k8s-diff-port-403602" [62d9cce7-ae7f-4ca8-8821-bcc444aef365] Running
	I1124 13:50:50.114240  639073 system_pods.go:89] "kindnet-hdcbn" [88d22920-c2fd-4bdf-95ec-c2f4f5c22669] Running
	I1124 13:50:50.114245  639073 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403602" [935e74ec-fe4d-4a3a-b83d-ff0bf904f0d3] Running
	I1124 13:50:50.114250  639073 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403602" [67672d0f-6473-46c9-9376-665f7eea8cff] Running
	I1124 13:50:50.114256  639073 system_pods.go:89] "kube-proxy-fhwvd" [a8814197-f505-433e-a55d-b0106f40e505] Running
	I1124 13:50:50.114261  639073 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403602" [febe5d3d-ddbd-4960-ae25-0db79a14c200] Running
	I1124 13:50:50.114270  639073 system_pods.go:89] "storage-provisioner" [649238f9-bcbc-4569-bff7-9488834e21c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:50:50.114290  639073 retry.go:31] will retry after 502.642707ms: missing components: kube-dns
	I1124 13:50:50.622594  639073 system_pods.go:86] 8 kube-system pods found
	I1124 13:50:50.622638  639073 system_pods.go:89] "coredns-66bc5c9577-hrj7f" [f86f95a0-9e92-429a-9dd7-76843d8d6af1] Running
	I1124 13:50:50.622648  639073 system_pods.go:89] "etcd-default-k8s-diff-port-403602" [62d9cce7-ae7f-4ca8-8821-bcc444aef365] Running
	I1124 13:50:50.622656  639073 system_pods.go:89] "kindnet-hdcbn" [88d22920-c2fd-4bdf-95ec-c2f4f5c22669] Running
	I1124 13:50:50.622661  639073 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403602" [935e74ec-fe4d-4a3a-b83d-ff0bf904f0d3] Running
	I1124 13:50:50.622667  639073 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403602" [67672d0f-6473-46c9-9376-665f7eea8cff] Running
	I1124 13:50:50.622672  639073 system_pods.go:89] "kube-proxy-fhwvd" [a8814197-f505-433e-a55d-b0106f40e505] Running
	I1124 13:50:50.622678  639073 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403602" [febe5d3d-ddbd-4960-ae25-0db79a14c200] Running
	I1124 13:50:50.622683  639073 system_pods.go:89] "storage-provisioner" [649238f9-bcbc-4569-bff7-9488834e21c8] Running
	I1124 13:50:50.622693  639073 system_pods.go:126] duration metric: took 1.55940641s to wait for k8s-apps to be running ...
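The retry.go lines above show the k8s-apps wait: list the kube-system pods, report what is still missing (kube-dns here while coredns is Pending), sleep a growing delay, and try again until everything is Running. A generic sketch of that pattern; the check function below is a stand-in, not minikube's system_pods logic:

package main

import (
	"fmt"
	"time"
)

// retryUntil calls check with a growing delay until it reports no missing
// components or the timeout expires.
func retryUntil(timeout time.Duration, check func() []string) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; still missing: %v", missing)
		}
		fmt.Printf("will retry after %s: missing components: %v\n", delay, missing)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay, roughly like the log's 244ms -> 364ms -> 406ms -> 502ms
	}
}

func main() {
	attempts := 0
	_ = retryUntil(10*time.Second, func() []string {
		attempts++
		if attempts < 3 {
			return []string{"kube-dns"} // pretend coredns is still Pending
		}
		return nil
	})
	fmt.Println("k8s-apps running")
}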
	I1124 13:50:50.622708  639073 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:50:50.622762  639073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:50:50.643393  639073 system_svc.go:56] duration metric: took 20.674265ms WaitForService to wait for kubelet
	I1124 13:50:50.643508  639073 kubeadm.go:587] duration metric: took 14.195088836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:50:50.643535  639073 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:50:50.648356  639073 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:50:50.648394  639073 node_conditions.go:123] node cpu capacity is 8
	I1124 13:50:50.648416  639073 node_conditions.go:105] duration metric: took 4.875317ms to run NodePressure ...
	I1124 13:50:50.648433  639073 start.go:242] waiting for startup goroutines ...
	I1124 13:50:50.648441  639073 start.go:247] waiting for cluster config update ...
	I1124 13:50:50.648455  639073 start.go:256] writing updated cluster config ...
	I1124 13:50:50.648855  639073 ssh_runner.go:195] Run: rm -f paused
	I1124 13:50:50.654587  639073 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:50:50.660298  639073 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hrj7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:50:50.669813  639073 pod_ready.go:94] pod "coredns-66bc5c9577-hrj7f" is "Ready"
	I1124 13:50:50.669847  639073 pod_ready.go:86] duration metric: took 9.517505ms for pod "coredns-66bc5c9577-hrj7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:50:50.675277  639073 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-403602" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:50:50.685697  639073 pod_ready.go:94] pod "etcd-default-k8s-diff-port-403602" is "Ready"
	I1124 13:50:50.685732  639073 pod_ready.go:86] duration metric: took 10.418008ms for pod "etcd-default-k8s-diff-port-403602" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:50:50.688779  639073 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-403602" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:50:50.698125  639073 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-403602" is "Ready"
	I1124 13:50:50.698161  639073 pod_ready.go:86] duration metric: took 9.348971ms for pod "kube-apiserver-default-k8s-diff-port-403602" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:50:50.702983  639073 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-403602" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:50:51.059776  639073 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-403602" is "Ready"
	I1124 13:50:51.059817  639073 pod_ready.go:86] duration metric: took 356.797917ms for pod "kube-controller-manager-default-k8s-diff-port-403602" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:50:51.261272  639073 pod_ready.go:83] waiting for pod "kube-proxy-fhwvd" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a9ab60408f323       56cc512116c8f       9 seconds ago       Running             busybox                   0                   c5d92459bd00b       busybox                                      default
	03fe961e764d7       52546a367cc9e       15 seconds ago      Running             coredns                   0                   38bcb7e597a37       coredns-66bc5c9577-rn6dx                     kube-system
	902e0b827ac38       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   88128ddc54ac4       storage-provisioner                          kube-system
	5914c57b066be       409467f978b4a       27 seconds ago      Running             kindnet-cni               0                   c2cb83d96d081       kindnet-sq6tm                                kube-system
	2070bf9a47086       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   05dffec676416       kube-proxy-6v565                             kube-system
	85c949a723925       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   a1892275d242f       kube-controller-manager-embed-certs-971503   kube-system
	4d4c96dea1c1c       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   9ff27781ab7f8       kube-apiserver-embed-certs-971503            kube-system
	c163234ff7ad3       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   01d5c3ce57e02       kube-scheduler-embed-certs-971503            kube-system
	ffd3e36daeb72       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   0c01bc25b98c3       etcd-embed-certs-971503                      kube-system
	
	
	==> containerd <==
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.498673622Z" level=info msg="StartContainer for \"902e0b827ac388051eac7d9f68d5880c4b79476d24f7f6056d599b6adeab7723\""
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.500108765Z" level=info msg="connecting to shim 902e0b827ac388051eac7d9f68d5880c4b79476d24f7f6056d599b6adeab7723" address="unix:///run/containerd/s/aed2f838531fc051aae5f30a4ebdae656b0b8dae5aa68e7b19371385baca70ec" protocol=ttrpc version=3
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.505889733Z" level=info msg="CreateContainer within sandbox \"38bcb7e597a3798bd1c14e1053e62d1375ed6ef1c3b634b8f17b54da9be12785\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.525272553Z" level=info msg="Container 03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.540886465Z" level=info msg="CreateContainer within sandbox \"38bcb7e597a3798bd1c14e1053e62d1375ed6ef1c3b634b8f17b54da9be12785\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122\""
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.545314825Z" level=info msg="StartContainer for \"03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122\""
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.546542695Z" level=info msg="connecting to shim 03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122" address="unix:///run/containerd/s/57789ae6c48c2d5970e6abf243c6217948592f7b200de2842d7df5688bff575f" protocol=ttrpc version=3
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.632243833Z" level=info msg="StartContainer for \"902e0b827ac388051eac7d9f68d5880c4b79476d24f7f6056d599b6adeab7723\" returns successfully"
	Nov 24 13:50:36 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:36.644888424Z" level=info msg="StartContainer for \"03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122\" returns successfully"
	Nov 24 13:50:40 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:40.306192122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc,Namespace:default,Attempt:0,}"
	Nov 24 13:50:40 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:40.390138758Z" level=info msg="connecting to shim c5d92459bd00b6cd97d7596fce69259de60b42e27cb4e0ce577931f91218ebe8" address="unix:///run/containerd/s/c7eec423d317874953f67e786b9cc9eeac10d0633f922281da92fc7f6d52dee9" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 13:50:40 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:40.473810719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc,Namespace:default,Attempt:0,} returns sandbox id \"c5d92459bd00b6cd97d7596fce69259de60b42e27cb4e0ce577931f91218ebe8\""
	Nov 24 13:50:40 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:40.476459295Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.512077492Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.512842081Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396641"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.514367616Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.518128824Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.518984328Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.042472881s"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.519040677Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.524304727Z" level=info msg="CreateContainer within sandbox \"c5d92459bd00b6cd97d7596fce69259de60b42e27cb4e0ce577931f91218ebe8\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.532905722Z" level=info msg="Container a9ab60408f3233d2440967de0e8ea69eb28b13ab6543f1e8c5b922c4b0a15eb3: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.541301596Z" level=info msg="CreateContainer within sandbox \"c5d92459bd00b6cd97d7596fce69259de60b42e27cb4e0ce577931f91218ebe8\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"a9ab60408f3233d2440967de0e8ea69eb28b13ab6543f1e8c5b922c4b0a15eb3\""
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.542196865Z" level=info msg="StartContainer for \"a9ab60408f3233d2440967de0e8ea69eb28b13ab6543f1e8c5b922c4b0a15eb3\""
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.543218914Z" level=info msg="connecting to shim a9ab60408f3233d2440967de0e8ea69eb28b13ab6543f1e8c5b922c4b0a15eb3" address="unix:///run/containerd/s/c7eec423d317874953f67e786b9cc9eeac10d0633f922281da92fc7f6d52dee9" protocol=ttrpc version=3
	Nov 24 13:50:42 embed-certs-971503 containerd[666]: time="2025-11-24T13:50:42.600897748Z" level=info msg="StartContainer for \"a9ab60408f3233d2440967de0e8ea69eb28b13ab6543f1e8c5b922c4b0a15eb3\" returns successfully"
	
	
	==> coredns [03fe961e764d7c8628b5c71ccc3bd4901bd840cb6e683e5795c4f4414f39b122] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57449 - 39200 "HINFO IN 8583295162172501320.1839365236584150202. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.100038879s
	
	
	==> describe nodes <==
	Name:               embed-certs-971503
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-971503
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=embed-certs-971503
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_50_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:50:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-971503
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:50:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:50:49 +0000   Mon, 24 Nov 2025 13:50:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:50:49 +0000   Mon, 24 Nov 2025 13:50:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:50:49 +0000   Mon, 24 Nov 2025 13:50:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:50:49 +0000   Mon, 24 Nov 2025 13:50:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-971503
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                960977f5-8c2d-4dbc-a619-abd3283e065f
	  Boot ID:                    715d4626-373f-499b-b5de-b6d832ce4fe4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-rn6dx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-embed-certs-971503                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-sq6tm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-971503             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-embed-certs-971503    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-6v565                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-971503             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  39s (x8 over 40s)  kubelet          Node embed-certs-971503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 40s)  kubelet          Node embed-certs-971503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x7 over 40s)  kubelet          Node embed-certs-971503 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node embed-certs-971503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node embed-certs-971503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node embed-certs-971503 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node embed-certs-971503 event: Registered Node embed-certs-971503 in Controller
	  Normal  NodeReady                17s                kubelet          Node embed-certs-971503 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[Nov24 12:45] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a fb 84 7d 9e 9e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[ +25.292047] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[  +0.024207] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 8e 71 0b 76 c3 08 06
	[ +16.768103] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	[  +5.950770] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[Nov24 12:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 8b d0 4a da 7e 08 06
	[  +0.000557] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[  +1.903671] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 1f e8 fc 59 74 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[ +17.535584] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 31 ec 7c 1d 38 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	
	
	==> etcd [ffd3e36daeb7225019c06b3e57efdea55f1463f1d72e997c0f78f1bf1d568f51] <==
	{"level":"warn","ts":"2025-11-24T13:50:15.162231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.194361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.223289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.237057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.245608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.255835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.265305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.275888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.284773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.293101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.302086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.318454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.322634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.331586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.342902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:15.409789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46220","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:50:24.640989Z","caller":"traceutil/trace.go:172","msg":"trace[784258116] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"103.368273ms","start":"2025-11-24T13:50:24.537595Z","end":"2025-11-24T13:50:24.640963Z","steps":["trace[784258116] 'process raft request'  (duration: 103.212478ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:50:39.583450Z","caller":"traceutil/trace.go:172","msg":"trace[20253060] linearizableReadLoop","detail":"{readStateIndex:476; appliedIndex:476; }","duration":"144.322539ms","start":"2025-11-24T13:50:39.439099Z","end":"2025-11-24T13:50:39.583422Z","steps":["trace[20253060] 'read index received'  (duration: 144.311573ms)","trace[20253060] 'applied index is now lower than readState.Index'  (duration: 9.645µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:50:39.583589Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.455158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:50:39.583679Z","caller":"traceutil/trace.go:172","msg":"trace[119140308] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:461; }","duration":"144.572561ms","start":"2025-11-24T13:50:39.439093Z","end":"2025-11-24T13:50:39.583666Z","steps":["trace[119140308] 'agreement among raft nodes before linearized reading'  (duration: 144.412549ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:50:39.583777Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.011915ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:50:39.583833Z","caller":"traceutil/trace.go:172","msg":"trace[1827108258] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:462; }","duration":"142.079218ms","start":"2025-11-24T13:50:39.441743Z","end":"2025-11-24T13:50:39.583822Z","steps":["trace[1827108258] 'agreement among raft nodes before linearized reading'  (duration: 141.988852ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:50:39.583832Z","caller":"traceutil/trace.go:172","msg":"trace[1806007110] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"155.737315ms","start":"2025-11-24T13:50:39.428080Z","end":"2025-11-24T13:50:39.583817Z","steps":["trace[1806007110] 'process raft request'  (duration: 155.37479ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:50:39.729093Z","caller":"traceutil/trace.go:172","msg":"trace[1405680079] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"141.818569ms","start":"2025-11-24T13:50:39.587244Z","end":"2025-11-24T13:50:39.729063Z","steps":["trace[1405680079] 'process raft request'  (duration: 127.397287ms)","trace[1405680079] 'compare'  (duration: 14.202296ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:50:40.006812Z","caller":"traceutil/trace.go:172","msg":"trace[1510918285] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"132.558219ms","start":"2025-11-24T13:50:39.874232Z","end":"2025-11-24T13:50:40.006791Z","steps":["trace[1510918285] 'process raft request'  (duration: 132.441084ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:50:52 up  2:33,  0 user,  load average: 5.41, 3.61, 2.31
	Linux embed-certs-971503 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5914c57b066be204389d90bfe7aeb5e3db92f6c228983299bb27fea23671aace] <==
	I1124 13:50:25.551982       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:50:25.552463       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 13:50:25.552792       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:50:25.552953       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:50:25.552985       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:50:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:50:25.874069       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:50:25.874176       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:50:25.874196       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:50:25.884325       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:50:26.076334       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:50:26.076385       1 metrics.go:72] Registering metrics
	I1124 13:50:26.076449       1 controller.go:711] "Syncing nftables rules"
	I1124 13:50:35.865234       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:50:35.865299       1 main.go:301] handling current node
	I1124 13:50:45.864948       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:50:45.865013       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4d4c96dea1c1ca7b866ccee2653eabf5ae5fd0a8eeb603e57a8901e9d474ccf3] <==
	E1124 13:50:16.099032       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 13:50:16.142473       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:50:16.147733       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:50:16.150611       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:16.160390       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:16.160757       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:50:16.274906       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:50:16.946221       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:50:16.954311       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:50:16.954329       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:50:17.887861       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:50:17.936349       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:50:17.990065       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:50:18.054449       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:50:18.062795       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 13:50:18.064142       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:50:18.070417       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:50:18.866890       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:50:18.879824       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:50:18.892093       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:50:23.093639       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:50:23.897080       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:23.902647       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:24.102276       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 13:50:48.572011       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:48634: use of closed network connection
	
	
	==> kube-controller-manager [85c949a723925862fb7aea2e303b50f684e0ffbc8e97734a1fa52293509d4ae6] <==
	I1124 13:50:22.991369       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 13:50:22.992324       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:50:22.993204       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 13:50:22.993216       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:50:22.994370       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:50:22.994390       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:50:22.995976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:50:22.997091       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 13:50:22.997178       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 13:50:22.997251       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 13:50:22.997260       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 13:50:22.997267       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 13:50:23.000337       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:50:23.008384       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-971503" podCIDRs=["10.244.0.0/24"]
	I1124 13:50:23.015391       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 13:50:23.015518       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 13:50:23.015959       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:50:23.016323       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:50:23.018483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:50:23.019690       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 13:50:23.019825       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 13:50:23.019972       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-971503"
	I1124 13:50:23.020048       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 13:50:23.028392       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:50:38.022657       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2070bf9a4708666ced634ffb7847907fc0d7071448fb6af6d357d643fba294b2] <==
	I1124 13:50:25.094123       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:50:25.147348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:50:25.248959       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:50:25.249016       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 13:50:25.249117       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:50:25.300868       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:50:25.301013       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:50:25.323034       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:50:25.324134       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:50:25.324292       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:50:25.331372       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:50:25.331473       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:50:25.331495       1 config.go:200] "Starting service config controller"
	I1124 13:50:25.333717       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:50:25.331695       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:50:25.334209       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:50:25.334895       1 config.go:309] "Starting node config controller"
	I1124 13:50:25.335688       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:50:25.336035       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:50:25.433553       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:50:25.434806       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:50:25.434966       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c163234ff7ad3dc9ac0841e5d5172ff77e045691de7b1aab98c5df56611d396c] <==
	E1124 13:50:16.079872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:50:16.079889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:50:16.079765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:50:16.080108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:50:16.080282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:50:16.887614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:50:16.988454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:50:16.992169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:50:17.001643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:50:17.101626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:50:17.141857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:50:17.181257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:50:17.185129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:50:17.222757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:50:17.263405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:50:17.366368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:50:17.380294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:50:17.419199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:50:17.428685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:50:17.466295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:50:17.588138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:50:17.610734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:50:17.614406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:50:17.618752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1124 13:50:20.572735       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:50:19 embed-certs-971503 kubelet[1447]: I1124 13:50:19.781361    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-971503" podStartSLOduration=1.781337789 podStartE2EDuration="1.781337789s" podCreationTimestamp="2025-11-24 13:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:19.781294102 +0000 UTC m=+1.170245131" watchObservedRunningTime="2025-11-24 13:50:19.781337789 +0000 UTC m=+1.170288801"
	Nov 24 13:50:19 embed-certs-971503 kubelet[1447]: E1124 13:50:19.783254    1447 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-971503\" already exists" pod="kube-system/etcd-embed-certs-971503"
	Nov 24 13:50:19 embed-certs-971503 kubelet[1447]: E1124 13:50:19.784231    1447 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-971503\" already exists" pod="kube-system/kube-scheduler-embed-certs-971503"
	Nov 24 13:50:19 embed-certs-971503 kubelet[1447]: E1124 13:50:19.784233    1447 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-971503\" already exists" pod="kube-system/kube-apiserver-embed-certs-971503"
	Nov 24 13:50:23 embed-certs-971503 kubelet[1447]: I1124 13:50:23.031266    1447 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 13:50:23 embed-certs-971503 kubelet[1447]: I1124 13:50:23.032127    1447 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252000    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c305a92d-fa9b-4b8a-baf4-d95e66619f08-lib-modules\") pod \"kube-proxy-6v565\" (UID: \"c305a92d-fa9b-4b8a-baf4-d95e66619f08\") " pod="kube-system/kube-proxy-6v565"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252060    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dfae42e1-154e-45f7-b1bd-86d3826cf4bf-cni-cfg\") pod \"kindnet-sq6tm\" (UID: \"dfae42e1-154e-45f7-b1bd-86d3826cf4bf\") " pod="kube-system/kindnet-sq6tm"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252099    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9bdj\" (UniqueName: \"kubernetes.io/projected/dfae42e1-154e-45f7-b1bd-86d3826cf4bf-kube-api-access-q9bdj\") pod \"kindnet-sq6tm\" (UID: \"dfae42e1-154e-45f7-b1bd-86d3826cf4bf\") " pod="kube-system/kindnet-sq6tm"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252122    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c305a92d-fa9b-4b8a-baf4-d95e66619f08-kube-proxy\") pod \"kube-proxy-6v565\" (UID: \"c305a92d-fa9b-4b8a-baf4-d95e66619f08\") " pod="kube-system/kube-proxy-6v565"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252143    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c305a92d-fa9b-4b8a-baf4-d95e66619f08-xtables-lock\") pod \"kube-proxy-6v565\" (UID: \"c305a92d-fa9b-4b8a-baf4-d95e66619f08\") " pod="kube-system/kube-proxy-6v565"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252182    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94gkz\" (UniqueName: \"kubernetes.io/projected/c305a92d-fa9b-4b8a-baf4-d95e66619f08-kube-api-access-94gkz\") pod \"kube-proxy-6v565\" (UID: \"c305a92d-fa9b-4b8a-baf4-d95e66619f08\") " pod="kube-system/kube-proxy-6v565"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252203    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfae42e1-154e-45f7-b1bd-86d3826cf4bf-xtables-lock\") pod \"kindnet-sq6tm\" (UID: \"dfae42e1-154e-45f7-b1bd-86d3826cf4bf\") " pod="kube-system/kindnet-sq6tm"
	Nov 24 13:50:24 embed-certs-971503 kubelet[1447]: I1124 13:50:24.252235    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfae42e1-154e-45f7-b1bd-86d3826cf4bf-lib-modules\") pod \"kindnet-sq6tm\" (UID: \"dfae42e1-154e-45f7-b1bd-86d3826cf4bf\") " pod="kube-system/kindnet-sq6tm"
	Nov 24 13:50:25 embed-certs-971503 kubelet[1447]: I1124 13:50:25.820468    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6v565" podStartSLOduration=1.8204401639999999 podStartE2EDuration="1.820440164s" podCreationTimestamp="2025-11-24 13:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:25.817394098 +0000 UTC m=+7.206345131" watchObservedRunningTime="2025-11-24 13:50:25.820440164 +0000 UTC m=+7.209391194"
	Nov 24 13:50:25 embed-certs-971503 kubelet[1447]: I1124 13:50:25.869086    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-sq6tm" podStartSLOduration=1.869064336 podStartE2EDuration="1.869064336s" podCreationTimestamp="2025-11-24 13:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:25.846243674 +0000 UTC m=+7.235194704" watchObservedRunningTime="2025-11-24 13:50:25.869064336 +0000 UTC m=+7.258015368"
	Nov 24 13:50:35 embed-certs-971503 kubelet[1447]: I1124 13:50:35.962971    1447 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 13:50:36 embed-certs-971503 kubelet[1447]: I1124 13:50:36.037657    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/decde4d2-8595-422a-a5e9-8f5b2019833e-config-volume\") pod \"coredns-66bc5c9577-rn6dx\" (UID: \"decde4d2-8595-422a-a5e9-8f5b2019833e\") " pod="kube-system/coredns-66bc5c9577-rn6dx"
	Nov 24 13:50:36 embed-certs-971503 kubelet[1447]: I1124 13:50:36.037699    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5r4r\" (UniqueName: \"kubernetes.io/projected/decde4d2-8595-422a-a5e9-8f5b2019833e-kube-api-access-w5r4r\") pod \"coredns-66bc5c9577-rn6dx\" (UID: \"decde4d2-8595-422a-a5e9-8f5b2019833e\") " pod="kube-system/coredns-66bc5c9577-rn6dx"
	Nov 24 13:50:36 embed-certs-971503 kubelet[1447]: I1124 13:50:36.037724    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhc6\" (UniqueName: \"kubernetes.io/projected/faebcb7e-12bd-45e5-a6f6-420848719e73-kube-api-access-7jhc6\") pod \"storage-provisioner\" (UID: \"faebcb7e-12bd-45e5-a6f6-420848719e73\") " pod="kube-system/storage-provisioner"
	Nov 24 13:50:36 embed-certs-971503 kubelet[1447]: I1124 13:50:36.037741    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/faebcb7e-12bd-45e5-a6f6-420848719e73-tmp\") pod \"storage-provisioner\" (UID: \"faebcb7e-12bd-45e5-a6f6-420848719e73\") " pod="kube-system/storage-provisioner"
	Nov 24 13:50:36 embed-certs-971503 kubelet[1447]: I1124 13:50:36.898491    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rn6dx" podStartSLOduration=12.898464625999999 podStartE2EDuration="12.898464626s" podCreationTimestamp="2025-11-24 13:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:36.8654932 +0000 UTC m=+18.254444230" watchObservedRunningTime="2025-11-24 13:50:36.898464626 +0000 UTC m=+18.287415659"
	Nov 24 13:50:39 embed-certs-971503 kubelet[1447]: I1124 13:50:39.585320    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.58529448 podStartE2EDuration="15.58529448s" podCreationTimestamp="2025-11-24 13:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:36.940747632 +0000 UTC m=+18.329698675" watchObservedRunningTime="2025-11-24 13:50:39.58529448 +0000 UTC m=+20.974245511"
	Nov 24 13:50:39 embed-certs-971503 kubelet[1447]: I1124 13:50:39.864816    1447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vm9\" (UniqueName: \"kubernetes.io/projected/3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc-kube-api-access-47vm9\") pod \"busybox\" (UID: \"3f97cea3-5c5e-42af-99d5-9f7a1a3f7dcc\") " pod="default/busybox"
	Nov 24 13:50:42 embed-certs-971503 kubelet[1447]: I1124 13:50:42.885214    1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.840835744 podStartE2EDuration="3.885193289s" podCreationTimestamp="2025-11-24 13:50:39 +0000 UTC" firstStartedPulling="2025-11-24 13:50:40.475809678 +0000 UTC m=+21.864760691" lastFinishedPulling="2025-11-24 13:50:42.520167213 +0000 UTC m=+23.909118236" observedRunningTime="2025-11-24 13:50:42.885008365 +0000 UTC m=+24.273959395" watchObservedRunningTime="2025-11-24 13:50:42.885193289 +0000 UTC m=+24.274144318"
	
	
	==> storage-provisioner [902e0b827ac388051eac7d9f68d5880c4b79476d24f7f6056d599b6adeab7723] <==
	I1124 13:50:36.662642       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 13:50:36.666666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:36.677628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:50:36.677819       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:50:36.678078       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-971503_69704622-0a0d-4cc9-a4e7-07c848af476e!
	I1124 13:50:36.678141       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"92a66980-4b02-4382-ab72-46a2cccd67dc", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-971503_69704622-0a0d-4cc9-a4e7-07c848af476e became leader
	W1124 13:50:36.692305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:36.700713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:50:36.779034       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-971503_69704622-0a0d-4cc9-a4e7-07c848af476e!
	W1124 13:50:38.706049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:38.797087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:40.800604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:40.806518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:42.810164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:42.814420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:44.818471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:44.823113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:46.827317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:46.832748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:48.836694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:48.841769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:50.846023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:50.851628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:52.855733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:52.861569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-971503 -n embed-certs-971503
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-971503 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.54s)
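The assertion that fails above is the open-file-descriptor check: the test runs `ulimit -n` inside the busybox pod and expects 1048576, but the pod reports 1024. A minimal manual re-check, as a sketch only (it assumes the embed-certs-971503 cluster and the busybox pod created from testdata/busybox.yaml are still running):

	# Re-run the exact check the test performs (command taken verbatim from the log above).
	kubectl --context embed-certs-971503 exec busybox -- /bin/sh -c "ulimit -n"
	# The suite expects 1048576; this run observed 1024.

	# Hypothetical follow-up (not part of the test): compare the soft/hard nofile limits
	# of the minikube node container itself from the host.
	docker exec embed-certs-971503 sh -c "ulimit -Sn; ulimit -Hn"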

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-403602 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6d25c78c-49dd-42e4-ba09-01c98b5c9084] Pending
helpers_test.go:352: "busybox" [6d25c78c-49dd-42e4-ba09-01c98b5c9084] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6d25c78c-49dd-42e4-ba09-01c98b5c9084] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005324901s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-403602 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-403602
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-403602:

-- stdout --
	[
	    {
	        "Id": "d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a",
	        "Created": "2025-11-24T13:50:06.558943808Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 641033,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:50:07.877101179Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a/hosts",
	        "LogPath": "/var/lib/docker/containers/d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a/d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a-json.log",
	        "Name": "/default-k8s-diff-port-403602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-403602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-403602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a",
	                "LowerDir": "/var/lib/docker/overlay2/a78bf6dc2cc9390c44b67aeb62618f001c2be55055b5ab9820ec6a5547228d39-init/diff:/var/lib/docker/overlay2/0f013e03fd0eaee4efc608fb0376e7d3e8ba628388f5191310c2259ab273ad26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a78bf6dc2cc9390c44b67aeb62618f001c2be55055b5ab9820ec6a5547228d39/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a78bf6dc2cc9390c44b67aeb62618f001c2be55055b5ab9820ec6a5547228d39/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a78bf6dc2cc9390c44b67aeb62618f001c2be55055b5ab9820ec6a5547228d39/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-403602",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-403602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-403602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-403602",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-403602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7ea97673605424b7cc9f139fc9f57f7d973548c311822abe4a54c35fef0f20de",
	            "SandboxKey": "/var/run/docker/netns/7ea976736054",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-403602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "23536668cac4e11b8ee87fcaff8957af4fe5e5db7db5467659c7658d1fa2205c",
	                    "EndpointID": "c8beb84479653474f72f2d17bcbcdcbbdf55d01c93286b741881afb423c8b070",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "82:94:76:cb:60:5a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-403602",
	                        "d7d8b69e2810"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
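
Note: in the HostConfig above, "Ulimits" is an empty list, i.e. no per-container nofile override was requested, so whatever open-file limit shows up inside the container is inherited from the Docker daemon. A minimal way to re-check both by hand, assuming the default-k8s-diff-port-403602 container from the inspect output is still running:

	docker inspect -f '{{.HostConfig.Ulimits}}' default-k8s-diff-port-403602
	docker exec default-k8s-diff-port-403602 sh -c 'ulimit -n'
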
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-403602 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-403602 logs -n 25: (3.194336587s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ image   │ no-preload-608395 image list --format=json                                                                                                                                                                                                          │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ pause   │ -p no-preload-608395 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ unpause │ -p no-preload-608395 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ delete  │ -p no-preload-608395                                                                                                                                                                                                                                │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p cert-expiration-099863                                                                                                                                                                                                                           │ cert-expiration-099863       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p no-preload-608395                                                                                                                                                                                                                                │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p disable-driver-mounts-312087                                                                                                                                                                                                                     │ disable-driver-mounts-312087 │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p default-k8s-diff-port-403602 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-403602 │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ start   │ -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p kubernetes-upgrade-358357                                                                                                                                                                                                                        │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p auto-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-355661                  │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-846862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ stop    │ -p newest-cni-846862 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-846862 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ image   │ newest-cni-846862 image list --format=json                                                                                                                                                                                                          │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ addons  │ enable metrics-server -p embed-certs-971503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-971503           │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ pause   │ -p newest-cni-846862 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ stop    │ -p embed-certs-971503 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-971503           │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ unpause │ -p newest-cni-846862 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p newest-cni-846862                                                                                                                                                                                                                                │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p newest-cni-846862                                                                                                                                                                                                                                │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p kindnet-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd                                                                                                      │ kindnet-355661               │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:50:59
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:50:59.581214  658791 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:50:59.581490  658791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:50:59.581502  658791 out.go:374] Setting ErrFile to fd 2...
	I1124 13:50:59.581507  658791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:50:59.581745  658791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:50:59.582319  658791 out.go:368] Setting JSON to false
	I1124 13:50:59.583628  658791 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9199,"bootTime":1763983061,"procs":368,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:50:59.583694  658791 start.go:143] virtualization: kvm guest
	I1124 13:50:59.585751  658791 out.go:179] * [kindnet-355661] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:50:59.587217  658791 notify.go:221] Checking for updates...
	I1124 13:50:59.587257  658791 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:50:59.588966  658791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:50:59.590462  658791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:50:59.591787  658791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:50:59.593331  658791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:50:59.594690  658791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:50:59.596774  658791 config.go:182] Loaded profile config "auto-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:59.596927  658791 config.go:182] Loaded profile config "default-k8s-diff-port-403602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:59.597213  658791 config.go:182] Loaded profile config "embed-certs-971503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:59.597428  658791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:50:59.628024  658791 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:50:59.628134  658791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:50:59.692021  658791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-24 13:50:59.680841906 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:50:59.692135  658791 docker.go:319] overlay module found
	I1124 13:50:59.694156  658791 out.go:179] * Using the docker driver based on user configuration
	I1124 13:50:59.695564  658791 start.go:309] selected driver: docker
	I1124 13:50:59.695580  658791 start.go:927] validating driver "docker" against <nil>
	I1124 13:50:59.695596  658791 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:50:59.696316  658791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:50:59.757527  658791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-24 13:50:59.74777331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:50:59.757701  658791 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:50:59.758029  658791 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:50:59.759768  658791 out.go:179] * Using Docker driver with root privileges
	I1124 13:50:59.761206  658791 cni.go:84] Creating CNI manager for "kindnet"
	I1124 13:50:59.761229  658791 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:50:59.761311  658791 start.go:353] cluster config:
	{Name:kindnet-355661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-355661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:50:59.762848  658791 out.go:179] * Starting "kindnet-355661" primary control-plane node in "kindnet-355661" cluster
	I1124 13:50:59.764343  658791 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:50:59.766041  658791 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:50:59.767390  658791 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:50:59.767431  658791 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 13:50:59.767441  658791 cache.go:65] Caching tarball of preloaded images
	I1124 13:50:59.767474  658791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:50:59.767535  658791 preload.go:238] Found /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 13:50:59.767546  658791 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 13:50:59.767637  658791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/kindnet-355661/config.json ...
	I1124 13:50:59.767676  658791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/kindnet-355661/config.json: {Name:mkacc2c260866bd710df10f4c0d3b61b71d60887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:59.791299  658791 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:50:59.791322  658791 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:50:59.791340  658791 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:50:59.791385  658791 start.go:360] acquireMachinesLock for kindnet-355661: {Name:mkba4e34a9cc28606819724c72ec84b43ff60956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:50:59.791519  658791 start.go:364] duration metric: took 93.767µs to acquireMachinesLock for "kindnet-355661"
	I1124 13:50:59.791551  658791 start.go:93] Provisioning new machine with config: &{Name:kindnet-355661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-355661 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:50:59.791624  658791 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:50:59.624873  648989 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:51:00.197245  648989 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:51:00.198703  648989 kubeadm.go:319] 
	I1124 13:51:00.198832  648989 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:51:00.198849  648989 kubeadm.go:319] 
	I1124 13:51:00.198982  648989 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:51:00.198996  648989 kubeadm.go:319] 
	I1124 13:51:00.199078  648989 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:51:00.199173  648989 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:51:00.199255  648989 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:51:00.199271  648989 kubeadm.go:319] 
	I1124 13:51:00.199357  648989 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:51:00.199369  648989 kubeadm.go:319] 
	I1124 13:51:00.199446  648989 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:51:00.199459  648989 kubeadm.go:319] 
	I1124 13:51:00.199589  648989 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:51:00.199707  648989 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:51:00.199837  648989 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:51:00.199855  648989 kubeadm.go:319] 
	I1124 13:51:00.200136  648989 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:51:00.200313  648989 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:51:00.200325  648989 kubeadm.go:319] 
	I1124 13:51:00.200435  648989 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jld3bc.in00pmtgg8e3apf8 \
	I1124 13:51:00.200586  648989 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c \
	I1124 13:51:00.200622  648989 kubeadm.go:319] 	--control-plane 
	I1124 13:51:00.200628  648989 kubeadm.go:319] 
	I1124 13:51:00.200730  648989 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:51:00.200736  648989 kubeadm.go:319] 
	I1124 13:51:00.200843  648989 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jld3bc.in00pmtgg8e3apf8 \
	I1124 13:51:00.200998  648989 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c 
	I1124 13:51:00.204454  648989 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:51:00.204633  648989 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:51:00.204673  648989 cni.go:84] Creating CNI manager for ""
	I1124 13:51:00.204683  648989 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:51:00.207312  648989 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	48d0cffedb5b1       56cc512116c8f       7 seconds ago       Running             busybox                   0                   2f7207971706b       busybox                                                default
	6bef39517b695       52546a367cc9e       13 seconds ago      Running             coredns                   0                   3de73b9a05408       coredns-66bc5c9577-hrj7f                               kube-system
	d1388e232ef6e       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   305e61c0ce3bf       storage-provisioner                                    kube-system
	f6e81cb7e0039       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   8585149c57f4f       kindnet-hdcbn                                          kube-system
	6c7be965ca472       fc25172553d79       25 seconds ago      Running             kube-proxy                0                   0463a70bc1fcf       kube-proxy-fhwvd                                       kube-system
	88ce89936da65       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   83ec108129f18       kube-scheduler-default-k8s-diff-port-403602            kube-system
	4bd8e7a420646       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   aee3f29e2e81c       etcd-default-k8s-diff-port-403602                      kube-system
	7ea26c900e6ae       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   cc317a0a62088       kube-apiserver-default-k8s-diff-port-403602            kube-system
	5a610f981abf4       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   e0f657fa33167       kube-controller-manager-default-k8s-diff-port-403602   kube-system
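	
	# The table above is the node's CRI-level container listing. Assuming crictl is
	# present in the node image (it normally is for containerd-based minikube nodes),
	# a similar view can be reproduced by hand from the host with:
	out/minikube-linux-amd64 -p default-k8s-diff-port-403602 ssh -- sudo crictl ps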
	
	
	==> containerd <==
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.157865026Z" level=info msg="connecting to shim d1388e232ef6e2ad16cd2fbe73c16bd9d1d16e11c9449b5c1ba02959d4f60694" address="unix:///run/containerd/s/6d2de559c65fdf680403157d4482e1d177495e390a7c7883fde152a8dd9475ae" protocol=ttrpc version=3
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.187509762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hrj7f,Uid:f86f95a0-9e92-429a-9dd7-76843d8d6af1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3de73b9a05408da56af3e2a664ca600f3c0e07e943e1bf972db36cbada943b6d\""
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.205753414Z" level=info msg="CreateContainer within sandbox \"3de73b9a05408da56af3e2a664ca600f3c0e07e943e1bf972db36cbada943b6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.223754475Z" level=info msg="Container 6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.242500116Z" level=info msg="CreateContainer within sandbox \"3de73b9a05408da56af3e2a664ca600f3c0e07e943e1bf972db36cbada943b6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c\""
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.251543617Z" level=info msg="StartContainer for \"6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c\""
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.255647310Z" level=info msg="connecting to shim 6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c" address="unix:///run/containerd/s/c756f4a6064bf6d3d6001f431f8d6fcd6609f2410b96cd654c4174a12947bdb1" protocol=ttrpc version=3
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.321474357Z" level=info msg="StartContainer for \"d1388e232ef6e2ad16cd2fbe73c16bd9d1d16e11c9449b5c1ba02959d4f60694\" returns successfully"
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.367004508Z" level=info msg="StartContainer for \"6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c\" returns successfully"
	Nov 24 13:50:52 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:52.864012405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6d25c78c-49dd-42e4-ba09-01c98b5c9084,Namespace:default,Attempt:0,}"
	Nov 24 13:50:52 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:52.906177098Z" level=info msg="connecting to shim 2f7207971706bad71c19566cbd05c87a67b07164a6d6c9fd08c272c20f1b4d31" address="unix:///run/containerd/s/01cd4962d5bcb0efe27b71fd22ac01f9cf4e8fc92cae96dab21bb1cd24e148a3" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 13:50:53 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:53.010236585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6d25c78c-49dd-42e4-ba09-01c98b5c9084,Namespace:default,Attempt:0,} returns sandbox id \"2f7207971706bad71c19566cbd05c87a67b07164a6d6c9fd08c272c20f1b4d31\""
	Nov 24 13:50:53 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:53.017264743Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.107126140Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.107624899Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.108886187Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.112103176Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.112844487Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.095384704s"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.112896963Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.119266267Z" level=info msg="CreateContainer within sandbox \"2f7207971706bad71c19566cbd05c87a67b07164a6d6c9fd08c272c20f1b4d31\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.130979458Z" level=info msg="Container 48d0cffedb5b1f2691c8a34b1f5f72bcec35ea4209867f9086fc28d2eddd7e53: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.139423038Z" level=info msg="CreateContainer within sandbox \"2f7207971706bad71c19566cbd05c87a67b07164a6d6c9fd08c272c20f1b4d31\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"48d0cffedb5b1f2691c8a34b1f5f72bcec35ea4209867f9086fc28d2eddd7e53\""
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.141772445Z" level=info msg="StartContainer for \"48d0cffedb5b1f2691c8a34b1f5f72bcec35ea4209867f9086fc28d2eddd7e53\""
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.142850063Z" level=info msg="connecting to shim 48d0cffedb5b1f2691c8a34b1f5f72bcec35ea4209867f9086fc28d2eddd7e53" address="unix:///run/containerd/s/01cd4962d5bcb0efe27b71fd22ac01f9cf4e8fc92cae96dab21bb1cd24e148a3" protocol=ttrpc version=3
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.221668184Z" level=info msg="StartContainer for \"48d0cffedb5b1f2691c8a34b1f5f72bcec35ea4209867f9086fc28d2eddd7e53\" returns successfully"
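	
	# The containerd messages above are collected from the node. Assuming containerd runs
	# as the "containerd" systemd unit inside the kicbase image, the same stream can be
	# followed live with:
	out/minikube-linux-amd64 -p default-k8s-diff-port-403602 ssh -- sudo journalctl -u containerd -f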
	
	
	==> coredns [6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60306 - 9022 "HINFO IN 4062326778416799386.5406779658318267711. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078824025s
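	
	# The HINFO query above is CoreDNS's startup self-lookup (loop plugin); an NXDOMAIN
	# answer there is normal. Assuming the kubeconfig context minikube creates for this
	# profile, in-cluster DNS can also be spot-checked from the busybox pod the test
	# deployed (nslookup is available in the busybox:1.28.4-glibc image):
	kubectl --context default-k8s-diff-port-403602 exec busybox -- nslookup kubernetes.default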
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-403602
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-403602
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=default-k8s-diff-port-403602
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_50_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:50:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-403602
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:51:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:51:01 +0000   Mon, 24 Nov 2025 13:50:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:51:01 +0000   Mon, 24 Nov 2025 13:50:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:51:01 +0000   Mon, 24 Nov 2025 13:50:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:51:01 +0000   Mon, 24 Nov 2025 13:50:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-403602
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                cd5662fa-7882-4163-8a73-93b2c89779bf
	  Boot ID:                    715d4626-373f-499b-b5de-b6d832ce4fe4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-hrj7f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-403602                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-hdcbn                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-403602             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-403602    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-fhwvd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-403602             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x7 over 38s)  kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node default-k8s-diff-port-403602 event: Registered Node default-k8s-diff-port-403602 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-403602 status is now: NodeReady
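	
	# The node summary above is a kubectl describe of the single control-plane node.
	# Assuming the kubeconfig context minikube creates for this profile, it can be
	# regenerated with:
	kubectl --context default-k8s-diff-port-403602 describe node default-k8s-diff-port-403602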
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[Nov24 12:45] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a fb 84 7d 9e 9e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[ +25.292047] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[  +0.024207] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 8e 71 0b 76 c3 08 06
	[ +16.768103] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	[  +5.950770] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[Nov24 12:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 8b d0 4a da 7e 08 06
	[  +0.000557] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[  +1.903671] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 1f e8 fc 59 74 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[ +17.535584] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 31 ec 7c 1d 38 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	
	
	==> etcd [4bd8e7a4206461d4f24d20c12cc438ac962b12d9ce82ecc5e2dc5e9129a09771] <==
	{"level":"warn","ts":"2025-11-24T13:50:27.493350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.504350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.514003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.528447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.539609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.550452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.563049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.577200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.588749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.610484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.627562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.638193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.652196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.662742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.671657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.747215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43718","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:50:38.435816Z","caller":"traceutil/trace.go:172","msg":"trace[143368246] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"162.732007ms","start":"2025-11-24T13:50:38.273062Z","end":"2025-11-24T13:50:38.435794Z","steps":["trace[143368246] 'process raft request'  (duration: 141.527656ms)","trace[143368246] 'compare'  (duration: 21.080729ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:50:38.457585Z","caller":"traceutil/trace.go:172","msg":"trace[272344791] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"180.457283ms","start":"2025-11-24T13:50:38.277103Z","end":"2025-11-24T13:50:38.457560Z","steps":["trace[272344791] 'process raft request'  (duration: 180.345626ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:50:38.579842Z","caller":"traceutil/trace.go:172","msg":"trace[1746402521] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"112.043656ms","start":"2025-11-24T13:50:38.467779Z","end":"2025-11-24T13:50:38.579823Z","steps":["trace[1746402521] 'process raft request'  (duration: 105.661029ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:50:38.848383Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.47083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:50:38.848469Z","caller":"traceutil/trace.go:172","msg":"trace[838323086] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:419; }","duration":"109.572376ms","start":"2025-11-24T13:50:38.738882Z","end":"2025-11-24T13:50:38.848455Z","steps":["trace[838323086] 'range keys from in-memory index tree'  (duration: 109.36155ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:50:38.988561Z","caller":"traceutil/trace.go:172","msg":"trace[912060111] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"127.717623ms","start":"2025-11-24T13:50:38.860824Z","end":"2025-11-24T13:50:38.988542Z","steps":["trace[912060111] 'process raft request'  (duration: 127.584654ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:50:39.555091Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.328292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790234124599532 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-403602\" mod_revision:420 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-403602\" value_size:7674 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-403602\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T13:50:39.555250Z","caller":"traceutil/trace.go:172","msg":"trace[676532518] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"264.458404ms","start":"2025-11-24T13:50:39.290767Z","end":"2025-11-24T13:50:39.555226Z","steps":["trace[676532518] 'process raft request'  (duration: 133.385534ms)","trace[676532518] 'compare'  (duration: 130.225337ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:50:39.729226Z","caller":"traceutil/trace.go:172","msg":"trace[1287188047] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"164.403313ms","start":"2025-11-24T13:50:39.564801Z","end":"2025-11-24T13:50:39.729204Z","steps":["trace[1287188047] 'process raft request'  (duration: 164.154945ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:51:03 up  2:33,  0 user,  load average: 5.84, 3.77, 2.38
	Linux default-k8s-diff-port-403602 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f6e81cb7e0039c2d77ae9231708ac749db0b3501a54664a4422b80fe6132cd97] <==
	I1124 13:50:38.117362       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:50:38.117629       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 13:50:38.117797       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:50:38.117817       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:50:38.117845       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:50:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:50:38.514873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:50:38.515061       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:50:38.515080       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:50:38.515322       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:50:38.815261       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:50:38.815292       1 metrics.go:72] Registering metrics
	I1124 13:50:38.815372       1 controller.go:711] "Syncing nftables rules"
	I1124 13:50:48.520002       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:50:48.520062       1 main.go:301] handling current node
	I1124 13:50:58.515773       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:50:58.515814       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7ea26c900e6ae058283ca6e4d01e6a00e99b9ddef722b6648e2d36f79c51ff70] <==
	I1124 13:50:28.634264       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 13:50:28.653553       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:50:28.669201       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:28.671311       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:50:28.683281       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:28.683695       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:50:28.805037       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:50:29.439441       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:50:29.444790       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:50:29.444816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:50:30.164085       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:50:30.215863       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:50:30.352953       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:50:30.363029       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 13:50:30.364253       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:50:30.369605       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:50:30.513727       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:50:31.269677       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:50:31.281738       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:50:31.293103       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:50:35.618074       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:35.623652       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:36.216017       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:50:36.317265       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1124 13:51:01.671882       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:42632: use of closed network connection
	
	
	==> kube-controller-manager [5a610f981abf47eadee658a7ba7f122344b0884c8b6e2f884cfa26ac9a78f0f9] <==
	I1124 13:50:35.512464       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:50:35.512503       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 13:50:35.512634       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 13:50:35.512696       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 13:50:35.512722       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 13:50:35.512746       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:50:35.512763       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:50:35.512846       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:50:35.514419       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:50:35.514457       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:50:35.514557       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:50:35.517743       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:50:35.517859       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 13:50:35.518940       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 13:50:35.518972       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:50:35.521248       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:50:35.525686       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:50:35.546386       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 13:50:35.546499       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 13:50:35.546554       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:50:35.546560       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 13:50:35.546596       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 13:50:35.546605       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 13:50:35.555379       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-403602" podCIDRs=["10.244.0.0/24"]
	I1124 13:50:50.454037       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6c7be965ca47218c4992b0b0d378ead0c5187796feee7a212158da7490e13458] <==
	I1124 13:50:37.561968       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:50:37.643060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:50:37.743451       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:50:37.743498       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 13:50:37.743635       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:50:37.774610       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:50:37.774695       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:50:37.783065       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:50:37.783505       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:50:37.783871       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:50:37.789014       1 config.go:309] "Starting node config controller"
	I1124 13:50:37.789050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:50:37.789059       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:50:37.789148       1 config.go:200] "Starting service config controller"
	I1124 13:50:37.790029       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:50:37.789444       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:50:37.789528       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:50:37.790073       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:50:37.790076       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:50:37.890220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:50:37.890291       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:50:37.890384       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [88ce89936da6514d143d879d8103a2adbf7b6fd98ef0185abfeb68595567f529] <==
	E1124 13:50:28.580536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:50:28.580599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:50:28.581820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:50:28.581882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:50:28.581954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:50:28.582007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:50:28.582052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:50:28.582097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:50:28.582188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:50:28.582137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:50:28.582314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:50:29.434512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:50:29.508612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:50:29.511354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:50:29.532155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:50:29.586571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:50:29.679795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:50:29.708039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:50:29.717736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:50:29.726369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:50:29.816737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:50:29.838270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:50:29.871041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:50:29.975580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 13:50:32.463865       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:50:35 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:35.918382    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-403602" podStartSLOduration=5.918357391 podStartE2EDuration="5.918357391s" podCreationTimestamp="2025-11-24 13:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:32.302771015 +0000 UTC m=+1.248919441" watchObservedRunningTime="2025-11-24 13:50:35.918357391 +0000 UTC m=+4.864505824"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269021    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-xtables-lock\") pod \"kindnet-hdcbn\" (UID: \"88d22920-c2fd-4bdf-95ec-c2f4f5c22669\") " pod="kube-system/kindnet-hdcbn"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269294    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-lib-modules\") pod \"kindnet-hdcbn\" (UID: \"88d22920-c2fd-4bdf-95ec-c2f4f5c22669\") " pod="kube-system/kindnet-hdcbn"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269329    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-cni-cfg\") pod \"kindnet-hdcbn\" (UID: \"88d22920-c2fd-4bdf-95ec-c2f4f5c22669\") " pod="kube-system/kindnet-hdcbn"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269462    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn9bc\" (UniqueName: \"kubernetes.io/projected/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-kube-api-access-cn9bc\") pod \"kindnet-hdcbn\" (UID: \"88d22920-c2fd-4bdf-95ec-c2f4f5c22669\") " pod="kube-system/kindnet-hdcbn"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269532    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a8814197-f505-433e-a55d-b0106f40e505-kube-proxy\") pod \"kube-proxy-fhwvd\" (UID: \"a8814197-f505-433e-a55d-b0106f40e505\") " pod="kube-system/kube-proxy-fhwvd"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269575    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8814197-f505-433e-a55d-b0106f40e505-lib-modules\") pod \"kube-proxy-fhwvd\" (UID: \"a8814197-f505-433e-a55d-b0106f40e505\") " pod="kube-system/kube-proxy-fhwvd"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269605    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8814197-f505-433e-a55d-b0106f40e505-xtables-lock\") pod \"kube-proxy-fhwvd\" (UID: \"a8814197-f505-433e-a55d-b0106f40e505\") " pod="kube-system/kube-proxy-fhwvd"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269663    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86jz6\" (UniqueName: \"kubernetes.io/projected/a8814197-f505-433e-a55d-b0106f40e505-kube-api-access-86jz6\") pod \"kube-proxy-fhwvd\" (UID: \"a8814197-f505-433e-a55d-b0106f40e505\") " pod="kube-system/kube-proxy-fhwvd"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.385871    1452 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.385958    1452 projected.go:196] Error preparing data for projected volume kube-api-access-86jz6 for pod kube-system/kube-proxy-fhwvd: configmap "kube-root-ca.crt" not found
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.386181    1452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8814197-f505-433e-a55d-b0106f40e505-kube-api-access-86jz6 podName:a8814197-f505-433e-a55d-b0106f40e505 nodeName:}" failed. No retries permitted until 2025-11-24 13:50:36.886143115 +0000 UTC m=+5.832291548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86jz6" (UniqueName: "kubernetes.io/projected/a8814197-f505-433e-a55d-b0106f40e505-kube-api-access-86jz6") pod "kube-proxy-fhwvd" (UID: "a8814197-f505-433e-a55d-b0106f40e505") : configmap "kube-root-ca.crt" not found
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.388456    1452 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.388503    1452 projected.go:196] Error preparing data for projected volume kube-api-access-cn9bc for pod kube-system/kindnet-hdcbn: configmap "kube-root-ca.crt" not found
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.388630    1452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-kube-api-access-cn9bc podName:88d22920-c2fd-4bdf-95ec-c2f4f5c22669 nodeName:}" failed. No retries permitted until 2025-11-24 13:50:36.888594418 +0000 UTC m=+5.834742848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cn9bc" (UniqueName: "kubernetes.io/projected/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-kube-api-access-cn9bc") pod "kindnet-hdcbn" (UID: "88d22920-c2fd-4bdf-95ec-c2f4f5c22669") : configmap "kube-root-ca.crt" not found
	Nov 24 13:50:38 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:38.269127    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hdcbn" podStartSLOduration=2.269103119 podStartE2EDuration="2.269103119s" podCreationTimestamp="2025-11-24 13:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:38.268147344 +0000 UTC m=+7.214295789" watchObservedRunningTime="2025-11-24 13:50:38.269103119 +0000 UTC m=+7.215251552"
	Nov 24 13:50:38 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:38.581632    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fhwvd" podStartSLOduration=2.581605439 podStartE2EDuration="2.581605439s" podCreationTimestamp="2025-11-24 13:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:38.46004843 +0000 UTC m=+7.406196863" watchObservedRunningTime="2025-11-24 13:50:38.581605439 +0000 UTC m=+7.527753872"
	Nov 24 13:50:48 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:48.619268    1452 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 13:50:48 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:48.762437    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m528s\" (UniqueName: \"kubernetes.io/projected/649238f9-bcbc-4569-bff7-9488834e21c8-kube-api-access-m528s\") pod \"storage-provisioner\" (UID: \"649238f9-bcbc-4569-bff7-9488834e21c8\") " pod="kube-system/storage-provisioner"
	Nov 24 13:50:48 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:48.762511    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv67j\" (UniqueName: \"kubernetes.io/projected/f86f95a0-9e92-429a-9dd7-76843d8d6af1-kube-api-access-fv67j\") pod \"coredns-66bc5c9577-hrj7f\" (UID: \"f86f95a0-9e92-429a-9dd7-76843d8d6af1\") " pod="kube-system/coredns-66bc5c9577-hrj7f"
	Nov 24 13:50:48 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:48.762565    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f86f95a0-9e92-429a-9dd7-76843d8d6af1-config-volume\") pod \"coredns-66bc5c9577-hrj7f\" (UID: \"f86f95a0-9e92-429a-9dd7-76843d8d6af1\") " pod="kube-system/coredns-66bc5c9577-hrj7f"
	Nov 24 13:50:48 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:48.762613    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/649238f9-bcbc-4569-bff7-9488834e21c8-tmp\") pod \"storage-provisioner\" (UID: \"649238f9-bcbc-4569-bff7-9488834e21c8\") " pod="kube-system/storage-provisioner"
	Nov 24 13:50:50 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:50.293248    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hrj7f" podStartSLOduration=14.293222238 podStartE2EDuration="14.293222238s" podCreationTimestamp="2025-11-24 13:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:50.292466414 +0000 UTC m=+19.238614913" watchObservedRunningTime="2025-11-24 13:50:50.293222238 +0000 UTC m=+19.239370675"
	Nov 24 13:50:50 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:50.312367    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.312254715 podStartE2EDuration="13.312254715s" podCreationTimestamp="2025-11-24 13:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:50.308792069 +0000 UTC m=+19.254940501" watchObservedRunningTime="2025-11-24 13:50:50.312254715 +0000 UTC m=+19.258403150"
	Nov 24 13:50:52 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:52.587702    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv67r\" (UniqueName: \"kubernetes.io/projected/6d25c78c-49dd-42e4-ba09-01c98b5c9084-kube-api-access-gv67r\") pod \"busybox\" (UID: \"6d25c78c-49dd-42e4-ba09-01c98b5c9084\") " pod="default/busybox"
	
	
	==> storage-provisioner [d1388e232ef6e2ad16cd2fbe73c16bd9d1d16e11c9449b5c1ba02959d4f60694] <==
	I1124 13:50:49.343752       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:50:49.361696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:50:49.361758       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 13:50:49.371372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:49.385381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:50:49.385635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:50:49.385909       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-403602_870f82d2-7450-46e7-b233-caa243111756!
	I1124 13:50:49.387036       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"245e233b-0e09-4e94-bc5c-af1b2abac362", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-403602_870f82d2-7450-46e7-b233-caa243111756 became leader
	W1124 13:50:49.393522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:49.406228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:50:49.486397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-403602_870f82d2-7450-46e7-b233-caa243111756!
	W1124 13:50:51.412072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:51.420614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:53.425342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:53.431574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:55.437386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:55.448112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:57.452240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:57.458598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:59.463817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:59.471790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:51:01.475485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:51:01.480517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:51:03.484635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:51:03.558247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-403602 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-403602
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-403602:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a",
	        "Created": "2025-11-24T13:50:06.558943808Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 641033,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:50:07.877101179Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a/hosts",
	        "LogPath": "/var/lib/docker/containers/d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a/d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a-json.log",
	        "Name": "/default-k8s-diff-port-403602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-403602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-403602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d7d8b69e2810bd57f33d616c47baac6fc9f36689058868f9b0704648eb21908a",
	                "LowerDir": "/var/lib/docker/overlay2/a78bf6dc2cc9390c44b67aeb62618f001c2be55055b5ab9820ec6a5547228d39-init/diff:/var/lib/docker/overlay2/0f013e03fd0eaee4efc608fb0376e7d3e8ba628388f5191310c2259ab273ad26/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a78bf6dc2cc9390c44b67aeb62618f001c2be55055b5ab9820ec6a5547228d39/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a78bf6dc2cc9390c44b67aeb62618f001c2be55055b5ab9820ec6a5547228d39/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a78bf6dc2cc9390c44b67aeb62618f001c2be55055b5ab9820ec6a5547228d39/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-403602",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-403602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-403602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-403602",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-403602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7ea97673605424b7cc9f139fc9f57f7d973548c311822abe4a54c35fef0f20de",
	            "SandboxKey": "/var/run/docker/netns/7ea976736054",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-403602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "23536668cac4e11b8ee87fcaff8957af4fe5e5db7db5467659c7658d1fa2205c",
	                    "EndpointID": "c8beb84479653474f72f2d17bcbcdcbbdf55d01c93286b741881afb423c8b070",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "82:94:76:cb:60:5a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-403602",
	                        "d7d8b69e2810"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-403602 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-403602 logs -n 25: (1.157776726s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-608395 image list --format=json                                                                                                                                                                                                          │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ pause   │ -p no-preload-608395 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ unpause │ -p no-preload-608395 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:49 UTC │
	│ delete  │ -p no-preload-608395                                                                                                                                                                                                                                │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p cert-expiration-099863                                                                                                                                                                                                                           │ cert-expiration-099863       │ jenkins │ v1.37.0 │ 24 Nov 25 13:49 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p no-preload-608395                                                                                                                                                                                                                                │ no-preload-608395            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p disable-driver-mounts-312087                                                                                                                                                                                                                     │ disable-driver-mounts-312087 │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p default-k8s-diff-port-403602 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-403602 │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ start   │ -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p kubernetes-upgrade-358357                                                                                                                                                                                                                        │ kubernetes-upgrade-358357    │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p auto-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-355661                  │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-846862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ stop    │ -p newest-cni-846862 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-846862 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ image   │ newest-cni-846862 image list --format=json                                                                                                                                                                                                          │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ addons  │ enable metrics-server -p embed-certs-971503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-971503           │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ pause   │ -p newest-cni-846862 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ stop    │ -p embed-certs-971503 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-971503           │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	│ unpause │ -p newest-cni-846862 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p newest-cni-846862                                                                                                                                                                                                                                │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ delete  │ -p newest-cni-846862                                                                                                                                                                                                                                │ newest-cni-846862            │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │ 24 Nov 25 13:50 UTC │
	│ start   │ -p kindnet-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd                                                                                                      │ kindnet-355661               │ jenkins │ v1.37.0 │ 24 Nov 25 13:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:50:59
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:50:59.581214  658791 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:50:59.581490  658791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:50:59.581502  658791 out.go:374] Setting ErrFile to fd 2...
	I1124 13:50:59.581507  658791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:50:59.581745  658791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:50:59.582319  658791 out.go:368] Setting JSON to false
	I1124 13:50:59.583628  658791 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9199,"bootTime":1763983061,"procs":368,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:50:59.583694  658791 start.go:143] virtualization: kvm guest
	I1124 13:50:59.585751  658791 out.go:179] * [kindnet-355661] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:50:59.587217  658791 notify.go:221] Checking for updates...
	I1124 13:50:59.587257  658791 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:50:59.588966  658791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:50:59.590462  658791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:50:59.591787  658791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:50:59.593331  658791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:50:59.594690  658791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:50:59.596774  658791 config.go:182] Loaded profile config "auto-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:59.596927  658791 config.go:182] Loaded profile config "default-k8s-diff-port-403602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:59.597213  658791 config.go:182] Loaded profile config "embed-certs-971503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:59.597428  658791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:50:59.628024  658791 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:50:59.628134  658791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:50:59.692021  658791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-24 13:50:59.680841906 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:50:59.692135  658791 docker.go:319] overlay module found
	I1124 13:50:59.694156  658791 out.go:179] * Using the docker driver based on user configuration
	I1124 13:50:59.695564  658791 start.go:309] selected driver: docker
	I1124 13:50:59.695580  658791 start.go:927] validating driver "docker" against <nil>
	I1124 13:50:59.695596  658791 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:50:59.696316  658791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:50:59.757527  658791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-24 13:50:59.74777331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:50:59.757701  658791 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:50:59.758029  658791 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:50:59.759768  658791 out.go:179] * Using Docker driver with root privileges
	I1124 13:50:59.761206  658791 cni.go:84] Creating CNI manager for "kindnet"
	I1124 13:50:59.761229  658791 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:50:59.761311  658791 start.go:353] cluster config:
	{Name:kindnet-355661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-355661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:50:59.762848  658791 out.go:179] * Starting "kindnet-355661" primary control-plane node in "kindnet-355661" cluster
	I1124 13:50:59.764343  658791 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:50:59.766041  658791 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:50:59.767390  658791 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:50:59.767431  658791 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 13:50:59.767441  658791 cache.go:65] Caching tarball of preloaded images
	I1124 13:50:59.767474  658791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:50:59.767535  658791 preload.go:238] Found /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1124 13:50:59.767546  658791 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 13:50:59.767637  658791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/kindnet-355661/config.json ...
	I1124 13:50:59.767676  658791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/kindnet-355661/config.json: {Name:mkacc2c260866bd710df10f4c0d3b61b71d60887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:50:59.791299  658791 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:50:59.791322  658791 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:50:59.791340  658791 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:50:59.791385  658791 start.go:360] acquireMachinesLock for kindnet-355661: {Name:mkba4e34a9cc28606819724c72ec84b43ff60956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:50:59.791519  658791 start.go:364] duration metric: took 93.767µs to acquireMachinesLock for "kindnet-355661"
	I1124 13:50:59.791551  658791 start.go:93] Provisioning new machine with config: &{Name:kindnet-355661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-355661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:50:59.791624  658791 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:50:59.624873  648989 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:51:00.197245  648989 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:51:00.198703  648989 kubeadm.go:319] 
	I1124 13:51:00.198832  648989 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:51:00.198849  648989 kubeadm.go:319] 
	I1124 13:51:00.198982  648989 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:51:00.198996  648989 kubeadm.go:319] 
	I1124 13:51:00.199078  648989 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:51:00.199173  648989 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:51:00.199255  648989 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:51:00.199271  648989 kubeadm.go:319] 
	I1124 13:51:00.199357  648989 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:51:00.199369  648989 kubeadm.go:319] 
	I1124 13:51:00.199446  648989 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:51:00.199459  648989 kubeadm.go:319] 
	I1124 13:51:00.199589  648989 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:51:00.199707  648989 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:51:00.199837  648989 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:51:00.199855  648989 kubeadm.go:319] 
	I1124 13:51:00.200136  648989 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:51:00.200313  648989 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:51:00.200325  648989 kubeadm.go:319] 
	I1124 13:51:00.200435  648989 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jld3bc.in00pmtgg8e3apf8 \
	I1124 13:51:00.200586  648989 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c \
	I1124 13:51:00.200622  648989 kubeadm.go:319] 	--control-plane 
	I1124 13:51:00.200628  648989 kubeadm.go:319] 
	I1124 13:51:00.200730  648989 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:51:00.200736  648989 kubeadm.go:319] 
	I1124 13:51:00.200843  648989 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jld3bc.in00pmtgg8e3apf8 \
	I1124 13:51:00.200998  648989 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:32fb1839a00503b33822b75b81c2f42d5061d18404c0a5cd12189dec7e20658c 
	I1124 13:51:00.204454  648989 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:51:00.204633  648989 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:51:00.204673  648989 cni.go:84] Creating CNI manager for ""
	I1124 13:51:00.204683  648989 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:51:00.207312  648989 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:51:00.208845  648989 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:51:00.214942  648989 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:51:00.214967  648989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:51:00.235283  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:51:00.507251  648989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:51:00.507355  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:51:00.507428  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-355661 minikube.k8s.io/updated_at=2025_11_24T13_51_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=auto-355661 minikube.k8s.io/primary=true
	I1124 13:51:00.637113  648989 ops.go:34] apiserver oom_adj: -16
	I1124 13:51:00.637269  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:51:01.137620  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:51:01.638164  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:51:02.137901  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:51:02.637578  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:51:03.137337  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:51:03.637640  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:51:04.137516  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:50:59.793820  658791 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:50:59.794144  658791 start.go:159] libmachine.API.Create for "kindnet-355661" (driver="docker")
	I1124 13:50:59.794198  658791 client.go:173] LocalClient.Create starting
	I1124 13:50:59.794417  658791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/ca.pem
	I1124 13:50:59.794472  658791 main.go:143] libmachine: Decoding PEM data...
	I1124 13:50:59.794505  658791 main.go:143] libmachine: Parsing certificate...
	I1124 13:50:59.794588  658791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-370498/.minikube/certs/cert.pem
	I1124 13:50:59.794617  658791 main.go:143] libmachine: Decoding PEM data...
	I1124 13:50:59.794633  658791 main.go:143] libmachine: Parsing certificate...
	I1124 13:50:59.795073  658791 cli_runner.go:164] Run: docker network inspect kindnet-355661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:50:59.813452  658791 cli_runner.go:211] docker network inspect kindnet-355661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:50:59.813578  658791 network_create.go:284] running [docker network inspect kindnet-355661] to gather additional debugging logs...
	I1124 13:50:59.813605  658791 cli_runner.go:164] Run: docker network inspect kindnet-355661
	W1124 13:50:59.831653  658791 cli_runner.go:211] docker network inspect kindnet-355661 returned with exit code 1
	I1124 13:50:59.831686  658791 network_create.go:287] error running [docker network inspect kindnet-355661]: docker network inspect kindnet-355661: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-355661 not found
	I1124 13:50:59.831698  658791 network_create.go:289] output of [docker network inspect kindnet-355661]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-355661 not found
	
	** /stderr **
	I1124 13:50:59.831813  658791 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:50:59.850557  658791 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8afb578efdfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:5e:46:43:aa:fe} reservation:<nil>}
	I1124 13:50:59.851316  658791 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ca3a55f53176 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:98:62:4c:91:8f} reservation:<nil>}
	I1124 13:50:59.851767  658791 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e11236ccf9ba IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:36:3b:80:be:95:34} reservation:<nil>}
	I1124 13:50:59.852398  658791 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9372931e5ccb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:ff:71:22:6a:68} reservation:<nil>}
	I1124 13:50:59.853281  658791 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee6a60}
	I1124 13:50:59.853305  658791 network_create.go:124] attempt to create docker network kindnet-355661 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 13:50:59.853367  658791 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-355661 kindnet-355661
	I1124 13:50:59.906460  658791 network_create.go:108] docker network kindnet-355661 192.168.85.0/24 created
	I1124 13:50:59.906497  658791 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-355661" container
	I1124 13:50:59.906568  658791 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:50:59.925589  658791 cli_runner.go:164] Run: docker volume create kindnet-355661 --label name.minikube.sigs.k8s.io=kindnet-355661 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:50:59.946685  658791 oci.go:103] Successfully created a docker volume kindnet-355661
	I1124 13:50:59.946782  658791 cli_runner.go:164] Run: docker run --rm --name kindnet-355661-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-355661 --entrypoint /usr/bin/test -v kindnet-355661:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:51:00.396854  658791 oci.go:107] Successfully prepared a docker volume kindnet-355661
	I1124 13:51:00.396953  658791 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:51:00.396964  658791 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:51:00.397031  658791 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-355661:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:51:04.638242  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:51:05.138133  648989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:51:05.227498  648989 kubeadm.go:1114] duration metric: took 4.720194342s to wait for elevateKubeSystemPrivileges
	I1124 13:51:05.227536  648989 kubeadm.go:403] duration metric: took 17.824338124s to StartCluster
	I1124 13:51:05.227560  648989 settings.go:142] acquiring lock: {Name:mka599a3c9bae62ffb84d261186583052ce40f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:51:05.227642  648989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:51:05.229621  648989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-370498/kubeconfig: {Name:mk44e8f04ffd8592063c19ad1e339ad14aaa66a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:51:05.229897  648989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:51:05.229896  648989 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:51:05.230029  648989 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:51:05.230141  648989 addons.go:70] Setting storage-provisioner=true in profile "auto-355661"
	I1124 13:51:05.230163  648989 addons.go:239] Setting addon storage-provisioner=true in "auto-355661"
	I1124 13:51:05.230163  648989 config.go:182] Loaded profile config "auto-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:51:05.230160  648989 addons.go:70] Setting default-storageclass=true in profile "auto-355661"
	I1124 13:51:05.230203  648989 host.go:66] Checking if "auto-355661" exists ...
	I1124 13:51:05.230204  648989 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-355661"
	I1124 13:51:05.230581  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Status}}
	I1124 13:51:05.230824  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Status}}
	I1124 13:51:05.234106  648989 out.go:179] * Verifying Kubernetes components...
	I1124 13:51:05.236294  648989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:51:05.259623  648989 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:51:05.259801  648989 addons.go:239] Setting addon default-storageclass=true in "auto-355661"
	I1124 13:51:05.259850  648989 host.go:66] Checking if "auto-355661" exists ...
	I1124 13:51:05.260394  648989 cli_runner.go:164] Run: docker container inspect auto-355661 --format={{.State.Status}}
	I1124 13:51:05.261145  648989 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:51:05.261165  648989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:51:05.261216  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:51:05.297349  648989 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:51:05.297384  648989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:51:05.297449  648989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-355661
	I1124 13:51:05.302159  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:51:05.323106  648989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/auto-355661/id_rsa Username:docker}
	I1124 13:51:05.355068  648989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:51:05.428254  648989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:51:05.450121  648989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:51:05.487850  648989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:51:05.641316  648989 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 13:51:05.644790  648989 node_ready.go:35] waiting up to 15m0s for node "auto-355661" to be "Ready" ...
	I1124 13:51:05.934341  648989 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	48d0cffedb5b1       56cc512116c8f       11 seconds ago      Running             busybox                   0                   2f7207971706b       busybox                                                default
	6bef39517b695       52546a367cc9e       17 seconds ago      Running             coredns                   0                   3de73b9a05408       coredns-66bc5c9577-hrj7f                               kube-system
	d1388e232ef6e       6e38f40d628db       17 seconds ago      Running             storage-provisioner       0                   305e61c0ce3bf       storage-provisioner                                    kube-system
	f6e81cb7e0039       409467f978b4a       29 seconds ago      Running             kindnet-cni               0                   8585149c57f4f       kindnet-hdcbn                                          kube-system
	6c7be965ca472       fc25172553d79       29 seconds ago      Running             kube-proxy                0                   0463a70bc1fcf       kube-proxy-fhwvd                                       kube-system
	88ce89936da65       7dd6aaa1717ab       41 seconds ago      Running             kube-scheduler            0                   83ec108129f18       kube-scheduler-default-k8s-diff-port-403602            kube-system
	4bd8e7a420646       5f1f5298c888d       41 seconds ago      Running             etcd                      0                   aee3f29e2e81c       etcd-default-k8s-diff-port-403602                      kube-system
	7ea26c900e6ae       c3994bc696102       41 seconds ago      Running             kube-apiserver            0                   cc317a0a62088       kube-apiserver-default-k8s-diff-port-403602            kube-system
	5a610f981abf4       c80c8dbafe7dd       41 seconds ago      Running             kube-controller-manager   0                   e0f657fa33167       kube-controller-manager-default-k8s-diff-port-403602   kube-system
	
	
	==> containerd <==
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.157865026Z" level=info msg="connecting to shim d1388e232ef6e2ad16cd2fbe73c16bd9d1d16e11c9449b5c1ba02959d4f60694" address="unix:///run/containerd/s/6d2de559c65fdf680403157d4482e1d177495e390a7c7883fde152a8dd9475ae" protocol=ttrpc version=3
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.187509762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hrj7f,Uid:f86f95a0-9e92-429a-9dd7-76843d8d6af1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3de73b9a05408da56af3e2a664ca600f3c0e07e943e1bf972db36cbada943b6d\""
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.205753414Z" level=info msg="CreateContainer within sandbox \"3de73b9a05408da56af3e2a664ca600f3c0e07e943e1bf972db36cbada943b6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.223754475Z" level=info msg="Container 6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.242500116Z" level=info msg="CreateContainer within sandbox \"3de73b9a05408da56af3e2a664ca600f3c0e07e943e1bf972db36cbada943b6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c\""
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.251543617Z" level=info msg="StartContainer for \"6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c\""
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.255647310Z" level=info msg="connecting to shim 6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c" address="unix:///run/containerd/s/c756f4a6064bf6d3d6001f431f8d6fcd6609f2410b96cd654c4174a12947bdb1" protocol=ttrpc version=3
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.321474357Z" level=info msg="StartContainer for \"d1388e232ef6e2ad16cd2fbe73c16bd9d1d16e11c9449b5c1ba02959d4f60694\" returns successfully"
	Nov 24 13:50:49 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:49.367004508Z" level=info msg="StartContainer for \"6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c\" returns successfully"
	Nov 24 13:50:52 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:52.864012405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6d25c78c-49dd-42e4-ba09-01c98b5c9084,Namespace:default,Attempt:0,}"
	Nov 24 13:50:52 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:52.906177098Z" level=info msg="connecting to shim 2f7207971706bad71c19566cbd05c87a67b07164a6d6c9fd08c272c20f1b4d31" address="unix:///run/containerd/s/01cd4962d5bcb0efe27b71fd22ac01f9cf4e8fc92cae96dab21bb1cd24e148a3" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 13:50:53 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:53.010236585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:6d25c78c-49dd-42e4-ba09-01c98b5c9084,Namespace:default,Attempt:0,} returns sandbox id \"2f7207971706bad71c19566cbd05c87a67b07164a6d6c9fd08c272c20f1b4d31\""
	Nov 24 13:50:53 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:53.017264743Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.107126140Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.107624899Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.108886187Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.112103176Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.112844487Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.095384704s"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.112896963Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.119266267Z" level=info msg="CreateContainer within sandbox \"2f7207971706bad71c19566cbd05c87a67b07164a6d6c9fd08c272c20f1b4d31\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.130979458Z" level=info msg="Container 48d0cffedb5b1f2691c8a34b1f5f72bcec35ea4209867f9086fc28d2eddd7e53: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.139423038Z" level=info msg="CreateContainer within sandbox \"2f7207971706bad71c19566cbd05c87a67b07164a6d6c9fd08c272c20f1b4d31\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"48d0cffedb5b1f2691c8a34b1f5f72bcec35ea4209867f9086fc28d2eddd7e53\""
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.141772445Z" level=info msg="StartContainer for \"48d0cffedb5b1f2691c8a34b1f5f72bcec35ea4209867f9086fc28d2eddd7e53\""
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.142850063Z" level=info msg="connecting to shim 48d0cffedb5b1f2691c8a34b1f5f72bcec35ea4209867f9086fc28d2eddd7e53" address="unix:///run/containerd/s/01cd4962d5bcb0efe27b71fd22ac01f9cf4e8fc92cae96dab21bb1cd24e148a3" protocol=ttrpc version=3
	Nov 24 13:50:55 default-k8s-diff-port-403602 containerd[663]: time="2025-11-24T13:50:55.221668184Z" level=info msg="StartContainer for \"48d0cffedb5b1f2691c8a34b1f5f72bcec35ea4209867f9086fc28d2eddd7e53\" returns successfully"
	
	
	==> coredns [6bef39517b6954efbeb122d447973d5e570099e947f0d694a89649b24f0a848c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60306 - 9022 "HINFO IN 4062326778416799386.5406779658318267711. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078824025s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-403602
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-403602
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=default-k8s-diff-port-403602
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_50_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:50:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-403602
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:51:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:51:01 +0000   Mon, 24 Nov 2025 13:50:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:51:01 +0000   Mon, 24 Nov 2025 13:50:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:51:01 +0000   Mon, 24 Nov 2025 13:50:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:51:01 +0000   Mon, 24 Nov 2025 13:50:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-403602
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                cd5662fa-7882-4163-8a73-93b2c89779bf
	  Boot ID:                    715d4626-373f-499b-b5de-b6d832ce4fe4
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-66bc5c9577-hrj7f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-default-k8s-diff-port-403602                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-hdcbn                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-default-k8s-diff-port-403602             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-403602    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-fhwvd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-default-k8s-diff-port-403602             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 43s)  kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  36s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  36s                kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s                kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s                kubelet          Node default-k8s-diff-port-403602 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node default-k8s-diff-port-403602 event: Registered Node default-k8s-diff-port-403602 in Controller
	  Normal  NodeReady                19s                kubelet          Node default-k8s-diff-port-403602 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[Nov24 12:45] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a fb 84 7d 9e 9e 08 06
	[  +0.000332] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 91 30 bc 58 af 08 06
	[ +25.292047] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[  +0.024207] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 8e 71 0b 76 c3 08 06
	[ +16.768103] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	[  +5.950770] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[Nov24 12:46] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 8b d0 4a da 7e 08 06
	[  +0.000557] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e b5 4a 70 0a 35 08 06
	[  +1.903671] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 1f e8 fc 59 74 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 14 b4 9b 3e 8f 08 06
	[ +17.535584] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 31 ec 7c 1d 38 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 45 b6 ad fe 93 08 06
	
	
	==> etcd [4bd8e7a4206461d4f24d20c12cc438ac962b12d9ce82ecc5e2dc5e9129a09771] <==
	{"level":"warn","ts":"2025-11-24T13:50:27.493350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.504350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.514003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.528447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.539609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.550452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.563049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.577200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.588749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.610484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.627562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.638193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.652196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.662742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.671657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:50:27.747215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43718","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:50:38.435816Z","caller":"traceutil/trace.go:172","msg":"trace[143368246] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"162.732007ms","start":"2025-11-24T13:50:38.273062Z","end":"2025-11-24T13:50:38.435794Z","steps":["trace[143368246] 'process raft request'  (duration: 141.527656ms)","trace[143368246] 'compare'  (duration: 21.080729ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:50:38.457585Z","caller":"traceutil/trace.go:172","msg":"trace[272344791] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"180.457283ms","start":"2025-11-24T13:50:38.277103Z","end":"2025-11-24T13:50:38.457560Z","steps":["trace[272344791] 'process raft request'  (duration: 180.345626ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:50:38.579842Z","caller":"traceutil/trace.go:172","msg":"trace[1746402521] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"112.043656ms","start":"2025-11-24T13:50:38.467779Z","end":"2025-11-24T13:50:38.579823Z","steps":["trace[1746402521] 'process raft request'  (duration: 105.661029ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:50:38.848383Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.47083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:50:38.848469Z","caller":"traceutil/trace.go:172","msg":"trace[838323086] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:419; }","duration":"109.572376ms","start":"2025-11-24T13:50:38.738882Z","end":"2025-11-24T13:50:38.848455Z","steps":["trace[838323086] 'range keys from in-memory index tree'  (duration: 109.36155ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:50:38.988561Z","caller":"traceutil/trace.go:172","msg":"trace[912060111] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"127.717623ms","start":"2025-11-24T13:50:38.860824Z","end":"2025-11-24T13:50:38.988542Z","steps":["trace[912060111] 'process raft request'  (duration: 127.584654ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:50:39.555091Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.328292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790234124599532 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-403602\" mod_revision:420 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-403602\" value_size:7674 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-403602\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T13:50:39.555250Z","caller":"traceutil/trace.go:172","msg":"trace[676532518] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"264.458404ms","start":"2025-11-24T13:50:39.290767Z","end":"2025-11-24T13:50:39.555226Z","steps":["trace[676532518] 'process raft request'  (duration: 133.385534ms)","trace[676532518] 'compare'  (duration: 130.225337ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:50:39.729226Z","caller":"traceutil/trace.go:172","msg":"trace[1287188047] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"164.403313ms","start":"2025-11-24T13:50:39.564801Z","end":"2025-11-24T13:50:39.729204Z","steps":["trace[1287188047] 'process raft request'  (duration: 164.154945ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:51:07 up  2:33,  0 user,  load average: 6.09, 3.85, 2.42
	Linux default-k8s-diff-port-403602 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f6e81cb7e0039c2d77ae9231708ac749db0b3501a54664a4422b80fe6132cd97] <==
	I1124 13:50:38.117362       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:50:38.117629       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 13:50:38.117797       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:50:38.117817       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:50:38.117845       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:50:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:50:38.514873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:50:38.515061       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:50:38.515080       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:50:38.515322       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:50:38.815261       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:50:38.815292       1 metrics.go:72] Registering metrics
	I1124 13:50:38.815372       1 controller.go:711] "Syncing nftables rules"
	I1124 13:50:48.520002       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:50:48.520062       1 main.go:301] handling current node
	I1124 13:50:58.515773       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:50:58.515814       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7ea26c900e6ae058283ca6e4d01e6a00e99b9ddef722b6648e2d36f79c51ff70] <==
	I1124 13:50:28.634264       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 13:50:28.653553       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:50:28.669201       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:28.671311       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:50:28.683281       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:28.683695       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:50:28.805037       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:50:29.439441       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:50:29.444790       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:50:29.444816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:50:30.164085       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:50:30.215863       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:50:30.352953       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:50:30.363029       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 13:50:30.364253       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:50:30.369605       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:50:30.513727       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:50:31.269677       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:50:31.281738       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:50:31.293103       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:50:35.618074       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:35.623652       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:50:36.216017       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:50:36.317265       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1124 13:51:01.671882       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:42632: use of closed network connection
	
	
	==> kube-controller-manager [5a610f981abf47eadee658a7ba7f122344b0884c8b6e2f884cfa26ac9a78f0f9] <==
	I1124 13:50:35.512464       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:50:35.512503       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 13:50:35.512634       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 13:50:35.512696       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 13:50:35.512722       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 13:50:35.512746       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:50:35.512763       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:50:35.512846       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:50:35.514419       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:50:35.514457       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:50:35.514557       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:50:35.517743       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:50:35.517859       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 13:50:35.518940       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 13:50:35.518972       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:50:35.521248       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:50:35.525686       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:50:35.546386       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 13:50:35.546499       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 13:50:35.546554       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:50:35.546560       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 13:50:35.546596       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 13:50:35.546605       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 13:50:35.555379       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-403602" podCIDRs=["10.244.0.0/24"]
	I1124 13:50:50.454037       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6c7be965ca47218c4992b0b0d378ead0c5187796feee7a212158da7490e13458] <==
	I1124 13:50:37.561968       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:50:37.643060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:50:37.743451       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:50:37.743498       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 13:50:37.743635       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:50:37.774610       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:50:37.774695       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:50:37.783065       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:50:37.783505       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:50:37.783871       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:50:37.789014       1 config.go:309] "Starting node config controller"
	I1124 13:50:37.789050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:50:37.789059       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:50:37.789148       1 config.go:200] "Starting service config controller"
	I1124 13:50:37.790029       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:50:37.789444       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:50:37.789528       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:50:37.790073       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:50:37.790076       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:50:37.890220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:50:37.890291       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:50:37.890384       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [88ce89936da6514d143d879d8103a2adbf7b6fd98ef0185abfeb68595567f529] <==
	E1124 13:50:28.580536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:50:28.580599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:50:28.581820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:50:28.581882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:50:28.581954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:50:28.582007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:50:28.582052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:50:28.582097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:50:28.582188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:50:28.582137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:50:28.582314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:50:29.434512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:50:29.508612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:50:29.511354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:50:29.532155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:50:29.586571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:50:29.679795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:50:29.708039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:50:29.717736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:50:29.726369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:50:29.816737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:50:29.838270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:50:29.871041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:50:29.975580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 13:50:32.463865       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:50:35 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:35.918382    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-403602" podStartSLOduration=5.918357391 podStartE2EDuration="5.918357391s" podCreationTimestamp="2025-11-24 13:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:32.302771015 +0000 UTC m=+1.248919441" watchObservedRunningTime="2025-11-24 13:50:35.918357391 +0000 UTC m=+4.864505824"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269021    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-xtables-lock\") pod \"kindnet-hdcbn\" (UID: \"88d22920-c2fd-4bdf-95ec-c2f4f5c22669\") " pod="kube-system/kindnet-hdcbn"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269294    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-lib-modules\") pod \"kindnet-hdcbn\" (UID: \"88d22920-c2fd-4bdf-95ec-c2f4f5c22669\") " pod="kube-system/kindnet-hdcbn"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269329    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-cni-cfg\") pod \"kindnet-hdcbn\" (UID: \"88d22920-c2fd-4bdf-95ec-c2f4f5c22669\") " pod="kube-system/kindnet-hdcbn"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269462    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn9bc\" (UniqueName: \"kubernetes.io/projected/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-kube-api-access-cn9bc\") pod \"kindnet-hdcbn\" (UID: \"88d22920-c2fd-4bdf-95ec-c2f4f5c22669\") " pod="kube-system/kindnet-hdcbn"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269532    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a8814197-f505-433e-a55d-b0106f40e505-kube-proxy\") pod \"kube-proxy-fhwvd\" (UID: \"a8814197-f505-433e-a55d-b0106f40e505\") " pod="kube-system/kube-proxy-fhwvd"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269575    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8814197-f505-433e-a55d-b0106f40e505-lib-modules\") pod \"kube-proxy-fhwvd\" (UID: \"a8814197-f505-433e-a55d-b0106f40e505\") " pod="kube-system/kube-proxy-fhwvd"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269605    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8814197-f505-433e-a55d-b0106f40e505-xtables-lock\") pod \"kube-proxy-fhwvd\" (UID: \"a8814197-f505-433e-a55d-b0106f40e505\") " pod="kube-system/kube-proxy-fhwvd"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:36.269663    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86jz6\" (UniqueName: \"kubernetes.io/projected/a8814197-f505-433e-a55d-b0106f40e505-kube-api-access-86jz6\") pod \"kube-proxy-fhwvd\" (UID: \"a8814197-f505-433e-a55d-b0106f40e505\") " pod="kube-system/kube-proxy-fhwvd"
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.385871    1452 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.385958    1452 projected.go:196] Error preparing data for projected volume kube-api-access-86jz6 for pod kube-system/kube-proxy-fhwvd: configmap "kube-root-ca.crt" not found
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.386181    1452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8814197-f505-433e-a55d-b0106f40e505-kube-api-access-86jz6 podName:a8814197-f505-433e-a55d-b0106f40e505 nodeName:}" failed. No retries permitted until 2025-11-24 13:50:36.886143115 +0000 UTC m=+5.832291548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86jz6" (UniqueName: "kubernetes.io/projected/a8814197-f505-433e-a55d-b0106f40e505-kube-api-access-86jz6") pod "kube-proxy-fhwvd" (UID: "a8814197-f505-433e-a55d-b0106f40e505") : configmap "kube-root-ca.crt" not found
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.388456    1452 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.388503    1452 projected.go:196] Error preparing data for projected volume kube-api-access-cn9bc for pod kube-system/kindnet-hdcbn: configmap "kube-root-ca.crt" not found
	Nov 24 13:50:36 default-k8s-diff-port-403602 kubelet[1452]: E1124 13:50:36.388630    1452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-kube-api-access-cn9bc podName:88d22920-c2fd-4bdf-95ec-c2f4f5c22669 nodeName:}" failed. No retries permitted until 2025-11-24 13:50:36.888594418 +0000 UTC m=+5.834742848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cn9bc" (UniqueName: "kubernetes.io/projected/88d22920-c2fd-4bdf-95ec-c2f4f5c22669-kube-api-access-cn9bc") pod "kindnet-hdcbn" (UID: "88d22920-c2fd-4bdf-95ec-c2f4f5c22669") : configmap "kube-root-ca.crt" not found
	Nov 24 13:50:38 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:38.269127    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hdcbn" podStartSLOduration=2.269103119 podStartE2EDuration="2.269103119s" podCreationTimestamp="2025-11-24 13:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:38.268147344 +0000 UTC m=+7.214295789" watchObservedRunningTime="2025-11-24 13:50:38.269103119 +0000 UTC m=+7.215251552"
	Nov 24 13:50:38 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:38.581632    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fhwvd" podStartSLOduration=2.581605439 podStartE2EDuration="2.581605439s" podCreationTimestamp="2025-11-24 13:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:38.46004843 +0000 UTC m=+7.406196863" watchObservedRunningTime="2025-11-24 13:50:38.581605439 +0000 UTC m=+7.527753872"
	Nov 24 13:50:48 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:48.619268    1452 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 13:50:48 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:48.762437    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m528s\" (UniqueName: \"kubernetes.io/projected/649238f9-bcbc-4569-bff7-9488834e21c8-kube-api-access-m528s\") pod \"storage-provisioner\" (UID: \"649238f9-bcbc-4569-bff7-9488834e21c8\") " pod="kube-system/storage-provisioner"
	Nov 24 13:50:48 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:48.762511    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv67j\" (UniqueName: \"kubernetes.io/projected/f86f95a0-9e92-429a-9dd7-76843d8d6af1-kube-api-access-fv67j\") pod \"coredns-66bc5c9577-hrj7f\" (UID: \"f86f95a0-9e92-429a-9dd7-76843d8d6af1\") " pod="kube-system/coredns-66bc5c9577-hrj7f"
	Nov 24 13:50:48 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:48.762565    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f86f95a0-9e92-429a-9dd7-76843d8d6af1-config-volume\") pod \"coredns-66bc5c9577-hrj7f\" (UID: \"f86f95a0-9e92-429a-9dd7-76843d8d6af1\") " pod="kube-system/coredns-66bc5c9577-hrj7f"
	Nov 24 13:50:48 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:48.762613    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/649238f9-bcbc-4569-bff7-9488834e21c8-tmp\") pod \"storage-provisioner\" (UID: \"649238f9-bcbc-4569-bff7-9488834e21c8\") " pod="kube-system/storage-provisioner"
	Nov 24 13:50:50 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:50.293248    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hrj7f" podStartSLOduration=14.293222238 podStartE2EDuration="14.293222238s" podCreationTimestamp="2025-11-24 13:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:50.292466414 +0000 UTC m=+19.238614913" watchObservedRunningTime="2025-11-24 13:50:50.293222238 +0000 UTC m=+19.239370675"
	Nov 24 13:50:50 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:50.312367    1452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.312254715 podStartE2EDuration="13.312254715s" podCreationTimestamp="2025-11-24 13:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:50:50.308792069 +0000 UTC m=+19.254940501" watchObservedRunningTime="2025-11-24 13:50:50.312254715 +0000 UTC m=+19.258403150"
	Nov 24 13:50:52 default-k8s-diff-port-403602 kubelet[1452]: I1124 13:50:52.587702    1452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv67r\" (UniqueName: \"kubernetes.io/projected/6d25c78c-49dd-42e4-ba09-01c98b5c9084-kube-api-access-gv67r\") pod \"busybox\" (UID: \"6d25c78c-49dd-42e4-ba09-01c98b5c9084\") " pod="default/busybox"
	
	
	==> storage-provisioner [d1388e232ef6e2ad16cd2fbe73c16bd9d1d16e11c9449b5c1ba02959d4f60694] <==
	I1124 13:50:49.361758       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 13:50:49.371372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:49.385381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:50:49.385635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:50:49.385909       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-403602_870f82d2-7450-46e7-b233-caa243111756!
	I1124 13:50:49.387036       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"245e233b-0e09-4e94-bc5c-af1b2abac362", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-403602_870f82d2-7450-46e7-b233-caa243111756 became leader
	W1124 13:50:49.393522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:49.406228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:50:49.486397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-403602_870f82d2-7450-46e7-b233-caa243111756!
	W1124 13:50:51.412072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:51.420614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:53.425342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:53.431574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:55.437386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:55.448112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:57.452240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:57.458598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:59.463817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:50:59.471790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:51:01.475485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:51:01.480517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:51:03.484635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:51:03.558247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:51:05.562202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:51:05.568909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-403602 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.60s)
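For manual triage while the default-k8s-diff-port-403602 profile is still running, the environment the failed DeployApp step exercised can be probed directly from the host. The commands below are an illustrative sketch rather than the test's own code path: the "ulimit -n" probe is an assumption about what the step asserts, and the systemd query assumes the node image runs systemd (consistent with the journal-style kubelet log above).

# Check the open-file limit inside the already-deployed busybox pod (default namespace, see the pod list above)
kubectl --context default-k8s-diff-port-403602 exec busybox -- /bin/sh -c "ulimit -n"
# Compare with the limit inside the minikube node container itself
docker exec default-k8s-diff-port-403602 /bin/sh -c "ulimit -n"
# And with the limit configured for containerd on the node (assumes systemd in the node image)
docker exec default-k8s-diff-port-403602 systemctl show containerd --property=LimitNOFILE

If the pod-level value is lower than the node- and runtime-level values, the gap is likely introduced by the runtime's per-container defaults rather than by the test workload itself.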


Test pass (303/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 12.1
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 11.2
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.44
21 TestBinaryMirror 0.86
22 TestOffline 49.76
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 128.76
29 TestAddons/serial/Volcano 41.31
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.51
35 TestAddons/parallel/Registry 15.31
36 TestAddons/parallel/RegistryCreds 0.7
37 TestAddons/parallel/Ingress 20.14
38 TestAddons/parallel/InspektorGadget 11.02
39 TestAddons/parallel/MetricsServer 5.7
41 TestAddons/parallel/CSI 56.24
42 TestAddons/parallel/Headlamp 16.94
43 TestAddons/parallel/CloudSpanner 5.53
44 TestAddons/parallel/LocalPath 52.89
45 TestAddons/parallel/NvidiaDevicePlugin 6.51
46 TestAddons/parallel/Yakd 10.8
47 TestAddons/parallel/AmdGpuDevicePlugin 5.52
48 TestAddons/StoppedEnableDisable 12.84
49 TestCertOptions 24.57
50 TestCertExpiration 211.3
52 TestForceSystemdFlag 31.91
53 TestForceSystemdEnv 26.47
54 TestDockerEnvContainerd 38.85
58 TestErrorSpam/setup 23.74
59 TestErrorSpam/start 0.7
60 TestErrorSpam/status 1.02
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.61
63 TestErrorSpam/stop 12.13
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 41.72
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.19
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.17
75 TestFunctional/serial/CacheCmd/cache/add_local 1.95
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 65.22
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.34
86 TestFunctional/serial/LogsFileCmd 1.38
87 TestFunctional/serial/InvalidService 4.17
89 TestFunctional/parallel/ConfigCmd 0.51
90 TestFunctional/parallel/DashboardCmd 11.17
91 TestFunctional/parallel/DryRun 0.44
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.03
97 TestFunctional/parallel/ServiceCmdConnect 12.57
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 29.71
101 TestFunctional/parallel/SSHCmd 0.65
102 TestFunctional/parallel/CpCmd 1.98
103 TestFunctional/parallel/MySQL 24.54
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 2.02
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
113 TestFunctional/parallel/License 0.37
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 0.52
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
120 TestFunctional/parallel/ImageCommands/ImageBuild 5.06
121 TestFunctional/parallel/ImageCommands/Setup 1.76
122 TestFunctional/parallel/ServiceCmd/DeployApp 9.18
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.24
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.21
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.95
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
135 TestFunctional/parallel/ServiceCmd/List 0.52
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
138 TestFunctional/parallel/ServiceCmd/Format 0.4
139 TestFunctional/parallel/ServiceCmd/URL 0.42
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/MountCmd/any-port 7.83
148 TestFunctional/parallel/ProfileCmd/profile_list 0.44
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
153 TestFunctional/parallel/MountCmd/specific-port 2.05
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 114.18
163 TestMultiControlPlane/serial/DeployApp 5.77
164 TestMultiControlPlane/serial/PingHostFromPods 1.24
165 TestMultiControlPlane/serial/AddWorkerNode 27.26
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
168 TestMultiControlPlane/serial/CopyFile 18.4
169 TestMultiControlPlane/serial/StopSecondaryNode 12.81
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.11
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.94
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.15
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.57
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
176 TestMultiControlPlane/serial/StopCluster 36.27
177 TestMultiControlPlane/serial/RestartCluster 55.32
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
179 TestMultiControlPlane/serial/AddSecondaryNode 46.34
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
185 TestJSONOutput/start/Command 41.21
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.71
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.63
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.89
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 38.09
211 TestKicCustomNetwork/use_default_bridge_network 22.93
212 TestKicExistingNetwork 23.67
213 TestKicCustomSubnet 27.55
214 TestKicStaticIP 24.2
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 54.01
219 TestMountStart/serial/StartWithMountFirst 7.53
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 7.69
222 TestMountStart/serial/VerifyMountSecond 0.29
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.29
225 TestMountStart/serial/Stop 1.27
226 TestMountStart/serial/RestartStopped 7.82
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 64.06
231 TestMultiNode/serial/DeployApp2Nodes 5.03
232 TestMultiNode/serial/PingHostFrom2Pods 0.83
233 TestMultiNode/serial/AddNode 24.23
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 10.33
237 TestMultiNode/serial/StopNode 2.34
238 TestMultiNode/serial/StartAfterStop 7.1
239 TestMultiNode/serial/RestartKeepsNodes 71.9
240 TestMultiNode/serial/DeleteNode 5.35
241 TestMultiNode/serial/StopMultiNode 24.11
242 TestMultiNode/serial/RestartMultiNode 47.08
243 TestMultiNode/serial/ValidateNameConflict 26.28
248 TestPreload 121.91
250 TestScheduledStopUnix 99.46
253 TestInsufficientStorage 12.43
254 TestRunningBinaryUpgrade 64.15
256 TestKubernetesUpgrade 344.66
257 TestMissingContainerUpgrade 140.31
259 TestStoppedBinaryUpgrade/Setup 2.76
260 TestPause/serial/Start 45.95
261 TestStoppedBinaryUpgrade/Upgrade 120.04
262 TestPause/serial/SecondStartNoReconfiguration 6.48
263 TestPause/serial/Pause 0.74
264 TestPause/serial/VerifyStatus 0.4
265 TestPause/serial/Unpause 1.41
266 TestPause/serial/PauseAgain 1.34
267 TestPause/serial/DeletePaused 3.28
268 TestPause/serial/VerifyDeletedResources 0.51
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
279 TestNoKubernetes/serial/StartWithK8s 27.25
287 TestNetworkPlugins/group/false 4.65
291 TestNoKubernetes/serial/StartWithStopK8s 23.9
292 TestNoKubernetes/serial/Start 7.14
293 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
294 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
295 TestNoKubernetes/serial/ProfileList 16.24
296 TestNoKubernetes/serial/Stop 1.33
297 TestNoKubernetes/serial/StartNoArgs 8.78
299 TestStartStop/group/old-k8s-version/serial/FirstStart 51.92
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
302 TestStartStop/group/no-preload/serial/FirstStart 52.57
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1
306 TestStartStop/group/old-k8s-version/serial/Stop 12.17
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
308 TestStartStop/group/no-preload/serial/Stop 12.12
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
310 TestStartStop/group/old-k8s-version/serial/SecondStart 44.37
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
312 TestStartStop/group/no-preload/serial/SecondStart 47.79
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
317 TestStartStop/group/old-k8s-version/serial/Pause 2.92
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
320 TestStartStop/group/embed-certs/serial/FirstStart 47.15
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
322 TestStartStop/group/no-preload/serial/Pause 3.52
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.15
326 TestStartStop/group/newest-cni/serial/FirstStart 35.7
327 TestNetworkPlugins/group/auto/Start 45.27
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.6
331 TestStartStop/group/newest-cni/serial/Stop 1.45
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
333 TestStartStop/group/newest-cni/serial/SecondStart 11.37
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
338 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
339 TestStartStop/group/newest-cni/serial/Pause 3.4
340 TestStartStop/group/embed-certs/serial/Stop 12.34
341 TestNetworkPlugins/group/kindnet/Start 42.81
342 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
343 TestStartStop/group/embed-certs/serial/SecondStart 53.43
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
345 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
346 TestNetworkPlugins/group/auto/KubeletFlags 0.32
347 TestNetworkPlugins/group/auto/NetCatPod 9.21
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.34
350 TestNetworkPlugins/group/auto/DNS 0.19
351 TestNetworkPlugins/group/auto/Localhost 0.14
352 TestNetworkPlugins/group/auto/HairPin 0.14
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
355 TestNetworkPlugins/group/kindnet/NetCatPod 8.21
356 TestNetworkPlugins/group/calico/Start 59.23
357 TestNetworkPlugins/group/kindnet/DNS 0.15
358 TestNetworkPlugins/group/kindnet/Localhost 0.12
359 TestNetworkPlugins/group/kindnet/HairPin 0.12
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
364 TestStartStop/group/embed-certs/serial/Pause 3.48
365 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
366 TestNetworkPlugins/group/custom-flannel/Start 56.55
367 TestNetworkPlugins/group/enable-default-cni/Start 42.32
368 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.47
369 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.28
370 TestNetworkPlugins/group/flannel/Start 55.51
371 TestNetworkPlugins/group/calico/ControllerPod 6.01
372 TestNetworkPlugins/group/calico/KubeletFlags 0.37
373 TestNetworkPlugins/group/calico/NetCatPod 8.22
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.25
376 TestNetworkPlugins/group/calico/DNS 0.16
377 TestNetworkPlugins/group/calico/Localhost 0.14
378 TestNetworkPlugins/group/calico/HairPin 0.14
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
382 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
383 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.23
384 TestNetworkPlugins/group/custom-flannel/DNS 0.19
385 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
386 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
387 TestNetworkPlugins/group/bridge/Start 70.76
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
390 TestNetworkPlugins/group/flannel/NetCatPod 8.22
391 TestNetworkPlugins/group/flannel/DNS 0.15
392 TestNetworkPlugins/group/flannel/Localhost 0.13
393 TestNetworkPlugins/group/flannel/HairPin 0.14
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
395 TestNetworkPlugins/group/bridge/NetCatPod 8.18
396 TestNetworkPlugins/group/bridge/DNS 0.13
397 TestNetworkPlugins/group/bridge/Localhost 0.11
398 TestNetworkPlugins/group/bridge/HairPin 0.11
x
+
TestDownloadOnly/v1.28.0/json-events (12.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-603823 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-603823 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.10071484s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 13:13:53.587847  374122 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1124 13:13:53.587988  374122 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
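Editor's note: for readers unfamiliar with what preload-exists asserts, the check boils down to "is the preloaded image tarball already on disk at the cached path reported above". Below is a minimal, standalone Go sketch of that idea — not the test's actual code; the MINIKUBE_HOME fallback and the hard-coded tarball name (copied from the log) are assumptions for illustration only.

```go
// Sketch: check whether a minikube preload tarball is already cached locally.
// Path layout and file name are taken from the log lines above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// MINIKUBE_HOME fallback is an assumption, not the harness's real logic.
	minikubeHome := os.Getenv("MINIKUBE_HOME")
	if minikubeHome == "" {
		minikubeHome = filepath.Join(os.Getenv("HOME"), ".minikube")
	}
	tarball := filepath.Join(minikubeHome, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4")

	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("preload not cached: %v\n", err)
		return
	}
	fmt.Printf("found local preload: %s\n", tarball)
}
```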

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-603823
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-603823: exit status 85 (83.028227ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-603823 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-603823 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:13:41
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:13:41.545639  374134 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:13:41.545883  374134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:41.545892  374134 out.go:374] Setting ErrFile to fd 2...
	I1124 13:13:41.545896  374134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:41.546101  374134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	W1124 13:13:41.546234  374134 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21932-370498/.minikube/config/config.json: open /home/jenkins/minikube-integration/21932-370498/.minikube/config/config.json: no such file or directory
	I1124 13:13:41.546699  374134 out.go:368] Setting JSON to true
	I1124 13:13:41.547710  374134 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6961,"bootTime":1763983061,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:13:41.547774  374134 start.go:143] virtualization: kvm guest
	I1124 13:13:41.553251  374134 out.go:99] [download-only-603823] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1124 13:13:41.553531  374134 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 13:13:41.553544  374134 notify.go:221] Checking for updates...
	I1124 13:13:41.555306  374134 out.go:171] MINIKUBE_LOCATION=21932
	I1124 13:13:41.557165  374134 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:13:41.559235  374134 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:13:41.561054  374134 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:13:41.562848  374134 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 13:13:41.565622  374134 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 13:13:41.566005  374134 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:13:41.589356  374134 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:13:41.589457  374134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:41.651731  374134 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-24 13:13:41.641825253 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:41.651865  374134 docker.go:319] overlay module found
	I1124 13:13:41.653592  374134 out.go:99] Using the docker driver based on user configuration
	I1124 13:13:41.653637  374134 start.go:309] selected driver: docker
	I1124 13:13:41.653644  374134 start.go:927] validating driver "docker" against <nil>
	I1124 13:13:41.653738  374134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:41.708203  374134 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-24 13:13:41.698203571 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:41.708367  374134 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:13:41.708883  374134 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 13:13:41.709077  374134 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:13:41.711353  374134 out.go:171] Using Docker driver with root privileges
	I1124 13:13:41.713015  374134 cni.go:84] Creating CNI manager for ""
	I1124 13:13:41.713112  374134 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:13:41.713126  374134 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:13:41.713228  374134 start.go:353] cluster config:
	{Name:download-only-603823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-603823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:13:41.714753  374134 out.go:99] Starting "download-only-603823" primary control-plane node in "download-only-603823" cluster
	I1124 13:13:41.714776  374134 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:13:41.716125  374134 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:13:41.716207  374134 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:13:41.716272  374134 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:13:41.733376  374134 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:13:41.733598  374134 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 13:13:41.733690  374134 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:13:42.064805  374134 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1124 13:13:42.064843  374134 cache.go:65] Caching tarball of preloaded images
	I1124 13:13:42.065128  374134 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:13:42.067169  374134 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 13:13:42.067201  374134 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1124 13:13:42.165496  374134 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1124 13:13:42.165633  374134 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1124 13:13:49.167628  374134 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	
	
	* The control-plane node download-only-603823 host does not exist
	  To start a cluster, run: "minikube start -p download-only-603823"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-603823
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (11.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-402605 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-402605 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.197539899s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1124 13:14:05.267413  374122 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1124 13:14:05.267521  374122 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-402605
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-402605: exit status 85 (80.614404ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-603823 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-603823 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ delete  │ -p download-only-603823                                                                                                                                                               │ download-only-603823 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ start   │ -o=json --download-only -p download-only-402605 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-402605 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:13:54
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:13:54.127828  374504 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:13:54.127965  374504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:54.127970  374504 out.go:374] Setting ErrFile to fd 2...
	I1124 13:13:54.127974  374504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:54.128179  374504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:13:54.128641  374504 out.go:368] Setting JSON to true
	I1124 13:13:54.129612  374504 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6973,"bootTime":1763983061,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:13:54.129708  374504 start.go:143] virtualization: kvm guest
	I1124 13:13:54.131730  374504 out.go:99] [download-only-402605] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:13:54.131958  374504 notify.go:221] Checking for updates...
	I1124 13:13:54.133316  374504 out.go:171] MINIKUBE_LOCATION=21932
	I1124 13:13:54.134711  374504 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:13:54.136061  374504 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:13:54.138382  374504 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:13:54.140168  374504 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 13:13:54.142739  374504 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 13:13:54.143079  374504 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:13:54.169839  374504 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:13:54.169951  374504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:54.224525  374504 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 13:13:54.215009591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:54.224645  374504 docker.go:319] overlay module found
	I1124 13:13:54.226622  374504 out.go:99] Using the docker driver based on user configuration
	I1124 13:13:54.226654  374504 start.go:309] selected driver: docker
	I1124 13:13:54.226660  374504 start.go:927] validating driver "docker" against <nil>
	I1124 13:13:54.226749  374504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:54.283588  374504 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 13:13:54.273926588 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:54.283793  374504 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:13:54.284546  374504 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 13:13:54.284755  374504 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:13:54.286594  374504 out.go:171] Using Docker driver with root privileges
	I1124 13:13:54.287802  374504 cni.go:84] Creating CNI manager for ""
	I1124 13:13:54.287888  374504 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:13:54.287907  374504 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:13:54.288015  374504 start.go:353] cluster config:
	{Name:download-only-402605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-402605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:13:54.289504  374504 out.go:99] Starting "download-only-402605" primary control-plane node in "download-only-402605" cluster
	I1124 13:13:54.289526  374504 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:13:54.290708  374504 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:13:54.290758  374504 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:13:54.290947  374504 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:13:54.308287  374504 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:13:54.308434  374504 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 13:13:54.308452  374504 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 13:13:54.308458  374504 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 13:13:54.308467  374504 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 13:13:54.640811  374504 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1124 13:13:54.640862  374504 cache.go:65] Caching tarball of preloaded images
	I1124 13:13:54.641074  374504 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:13:54.642988  374504 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1124 13:13:54.643020  374504 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1124 13:13:54.737020  374504 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1124 13:13:54.737072  374504 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21932-370498/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-402605 host does not exist
	  To start a cluster, run: "minikube start -p download-only-402605"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-402605
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-065255 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-065255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-065255
--- PASS: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestBinaryMirror (0.86s)

                                                
                                                
=== RUN   TestBinaryMirror
I1124 13:14:06.486907  374122 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-266962 --alsologtostderr --binary-mirror http://127.0.0.1:37733 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-266962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-266962
--- PASS: TestBinaryMirror (0.86s)
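Editor's note: the test above points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:37733). As a rough illustration of what such a mirror can be, here is a standalone Go sketch that serves a directory over HTTP on that port. The ./mirror directory name is hypothetical, and the exact path layout minikube expects under the mirror is not shown in this report, so treat this purely as a sketch of serving files over HTTP, not as the test's setup.

```go
// Sketch: a local HTTP file server of the kind a --binary-mirror URL could
// point at. Port matches the log above; directory layout is an assumption.
package main

import (
	"log"
	"net/http"
)

func main() {
	fs := http.FileServer(http.Dir("./mirror")) // hypothetical mirror directory
	log.Println("serving ./mirror on http://127.0.0.1:37733")
	log.Fatal(http.ListenAndServe("127.0.0.1:37733", fs))
}
```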

                                                
                                    
x
+
TestOffline (49.76s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-104814 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-104814 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (46.711763863s)
helpers_test.go:175: Cleaning up "offline-containerd-104814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-104814
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-104814: (3.043057611s)
--- PASS: TestOffline (49.76s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-093377
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-093377: exit status 85 (73.417424ms)

                                                
                                                
-- stdout --
	* Profile "addons-093377" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-093377"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
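Editor's note: the assertion above is that enabling an addon on a non-existent profile fails with exit status 85. The standalone Go sketch below shows the general pattern of running a command and reading its exit status; the binary path and arguments are copied from the log, but nothing else about the test harness (or the meaning of status 85) is implied.

```go
// Sketch: run a command and report its exit status, as the non-zero-exit
// assertion above does. Not the harness's real helper.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-093377")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("exit status %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("could not run command: %v\n", err)
	}
}
```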

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-093377
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-093377: exit status 85 (72.58988ms)

                                                
                                                
-- stdout --
	* Profile "addons-093377" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-093377"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (128.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-093377 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-093377 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.758300012s)
--- PASS: TestAddons/Setup (128.76s)

                                                
                                    
x
+
TestAddons/serial/Volcano (41.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 17.625229ms
addons_test.go:884: volcano-controller stabilized in 17.667649ms
addons_test.go:876: volcano-admission stabilized in 17.730654ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-c2wx8" [12e5283b-d959-489f-9522-8d2d49a1cd4d] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004439804s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-hdsr6" [3527c901-59f5-4f30-925d-d1b0dc7233e7] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004357771s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-m5dlq" [9b45d68e-88da-4340-8c86-0eb2b09d21a4] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004542288s
addons_test.go:903: (dbg) Run:  kubectl --context addons-093377 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-093377 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-093377 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [2bb0508c-1a52-4355-8e73-fc8f014088a5] Pending
helpers_test.go:352: "test-job-nginx-0" [2bb0508c-1a52-4355-8e73-fc8f014088a5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [2bb0508c-1a52-4355-8e73-fc8f014088a5] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004282074s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-093377 addons disable volcano --alsologtostderr -v=1: (11.94921325s)
--- PASS: TestAddons/serial/Volcano (41.31s)
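Editor's note: the Volcano checks above repeatedly "wait N minutes for pods matching a label selector" to become healthy. The following standalone Go sketch, assuming client-go, shows that wait-for-Running pattern; the namespace and selector come from the log, while the kubeconfig handling and 5-second polling interval are assumptions rather than the harness's real helpers.

```go
// Sketch: poll until all pods matching a label selector are Running.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("volcano-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=volcano-scheduler"})
		if err == nil && allRunning(pods.Items) {
			fmt.Println("all volcano-scheduler pods are Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for volcano-scheduler pods")
}

// allRunning reports whether the list is non-empty and every pod is Running.
func allRunning(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}
```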

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-093377 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-093377 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-093377 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-093377 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8f39068a-e354-4a06-976e-62173524b0bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8f39068a-e354-4a06-976e-62173524b0bb] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003969215s
addons_test.go:694: (dbg) Run:  kubectl --context addons-093377 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-093377 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-093377 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.31s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.11817ms
I1124 13:17:17.122420  374122 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 13:17:17.122450  374122 kapi.go:107] duration metric: took 3.790476ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-knr6z" [f15006dc-409a-4fdc-8d04-066a44aabc83] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003779726s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-wxhfc" [619b1297-169f-4614-812e-120da1c04f83] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004106584s
addons_test.go:392: (dbg) Run:  kubectl --context addons-093377 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-093377 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-093377 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.470015675s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 ip
2025/11/24 13:17:31 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.31s)
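Editor's note: the registry check probes the addon both from inside the cluster (wget against registry.kube-system.svc.cluster.local) and from the host ("[DEBUG] GET http://192.168.49.2:5000" above). The standalone Go sketch below illustrates the external half of that probe; the /v2/ ping path is a standard Docker registry endpoint and is an assumption here, not something the log shows.

```go
// Sketch: HTTP probe of the registry addon from the host side.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:5000/v2/")
	if err != nil {
		fmt.Printf("registry not reachable: %v\n", err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("registry responded with HTTP %d\n", resp.StatusCode)
}
```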

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.7s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.845921ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-093377
addons_test.go:332: (dbg) Run:  kubectl --context addons-093377 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

                                                
                                    
TestAddons/parallel/Ingress (20.14s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-093377 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-093377 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-093377 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [e40b2790-6198-408c-b02c-c01deaa28fcd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [e40b2790-6198-408c-b02c-c01deaa28fcd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004272944s
I1124 13:17:42.883880  374122 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-093377 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-093377 addons disable ingress --alsologtostderr -v=1: (7.890352756s)
--- PASS: TestAddons/parallel/Ingress (20.14s)
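
The same ingress round trip can be driven manually; a rough sketch with the manifests from minikube's testdata directory and the node IP from this run (the Host header has to match the rule in nginx-ingress-v1.yaml):

  # wait for the ingress controller, then deploy the sample backend and ingress
  kubectl --context addons-093377 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
  kubectl --context addons-093377 replace --force -f testdata/nginx-ingress-v1.yaml
  kubectl --context addons-093377 replace --force -f testdata/nginx-pod-svc.yaml
  # exercise the ingress from inside the node, routing on the Host header
  out/minikube-linux-amd64 -p addons-093377 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # ingress-dns: the example hostname should resolve against the node IP
  nslookup hello-john.test 192.168.49.2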

                                                
                                    
TestAddons/parallel/InspektorGadget (11.02s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-bgb5m" [da49bea2-5b03-4c21-ba41-9da1f5d24513] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00431464s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-093377 addons disable inspektor-gadget --alsologtostderr -v=1: (6.015918049s)
--- PASS: TestAddons/parallel/InspektorGadget (11.02s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.7s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.169679ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-dhg9c" [1ec0800f-8c07-4189-b669-2ba1b2d57cc9] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003355378s
addons_test.go:463: (dbg) Run:  kubectl --context addons-093377 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)
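
Once the metrics-server pod is Running, resource usage should be queryable; a minimal check mirroring the step above:

  kubectl --context addons-093377 -n kube-system get pods -l k8s-app=metrics-server
  kubectl --context addons-093377 top pods -n kube-system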

                                                
                                    
TestAddons/parallel/CSI (56.24s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1124 13:17:17.118685  374122 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.801062ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-093377 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-093377 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [6df92215-fcf3-4540-a9bf-05aee6b220d4] Pending
helpers_test.go:352: "task-pv-pod" [6df92215-fcf3-4540-a9bf-05aee6b220d4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [6df92215-fcf3-4540-a9bf-05aee6b220d4] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004338293s
addons_test.go:572: (dbg) Run:  kubectl --context addons-093377 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-093377 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-093377 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-093377 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-093377 delete pod task-pv-pod: (1.057571154s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-093377 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-093377 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-093377 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [cb88d3d9-ca3b-448a-bff6-27d3224f6ddc] Pending
helpers_test.go:352: "task-pv-pod-restore" [cb88d3d9-ca3b-448a-bff6-27d3224f6ddc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [cb88d3d9-ca3b-448a-bff6-27d3224f6ddc] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004396757s
addons_test.go:614: (dbg) Run:  kubectl --context addons-093377 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-093377 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-093377 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-093377 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.601254366s)
--- PASS: TestAddons/parallel/CSI (56.24s)
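
The snapshot/restore cycle exercised above maps directly onto the csi-hostpath-driver testdata manifests; condensed, the same sequence is roughly:

  kubectl --context addons-093377 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-093377 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  # snapshot the bound volume, then drop the original pod and claim
  kubectl --context addons-093377 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-093377 delete pod task-pv-pod
  kubectl --context addons-093377 delete pvc hpvc
  # restore a new claim from the snapshot and mount it in a fresh pod
  kubectl --context addons-093377 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-093377 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
  kubectl --context addons-093377 get volumesnapshot new-snapshot-demo -n default -o jsonpath={.status.readyToUse}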

                                                
                                    
TestAddons/parallel/Headlamp (16.94s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-093377 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-dgxpf" [bdd6132d-a815-4afb-8508-00774728bfe2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-dgxpf" [bdd6132d-a815-4afb-8508-00774728bfe2] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003601471s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-093377 addons disable headlamp --alsologtostderr -v=1: (6.100135708s)
--- PASS: TestAddons/parallel/Headlamp (16.94s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.53s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-kbldh" [836c5101-7ccf-4fd6-a144-13162a269748] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004209125s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
TestAddons/parallel/LocalPath (52.89s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-093377 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-093377 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-093377 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b61023c4-0eb2-443b-9a40-96dcdedfc0df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b61023c4-0eb2-443b-9a40-96dcdedfc0df] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b61023c4-0eb2-443b-9a40-96dcdedfc0df] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003884106s
addons_test.go:967: (dbg) Run:  kubectl --context addons-093377 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 ssh "cat /opt/local-path-provisioner/pvc-6093aa14-6dc5-41bb-974d-d3f08551d50b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-093377 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-093377 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-093377 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.930843207s)
--- PASS: TestAddons/parallel/LocalPath (52.89s)
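
The storage-provisioner-rancher (local-path) flow can be reproduced with the same testdata; note the pvc-<uid> directory name in the host path differs on every run:

  kubectl --context addons-093377 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-093377 apply -f testdata/storage-provisioner-rancher/pod.yaml
  # after the pod completes, the written file sits under the provisioner's
  # host directory on the node
  out/minikube-linux-amd64 -p addons-093377 ssh "ls /opt/local-path-provisioner/"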

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.51s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-d4xt6" [4b96d69c-517b-40eb-bc36-c4e0485cfada] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002918246s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
TestAddons/parallel/Yakd (10.8s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-srb9b" [ff20603c-2a1a-4d7b-97f4-b455311d5bcb] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004028191s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-093377 addons disable yakd --alsologtostderr -v=1: (5.792939631s)
--- PASS: TestAddons/parallel/Yakd (10.80s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.52s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-2p5f7" [07938bed-f0ba-4fbb-a3c9-2d0b6cab7400] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003742573s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-093377 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.52s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.84s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-093377
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-093377: (12.523937744s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-093377
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-093377
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-093377
--- PASS: TestAddons/StoppedEnableDisable (12.84s)

                                                
                                    
TestCertOptions (24.57s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-342221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-342221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (21.829064023s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-342221 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-342221 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-342221 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-342221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-342221
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-342221: (2.066092014s)
--- PASS: TestCertOptions (24.57s)
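
The extra SANs and the non-default apiserver port requested above end up in the apiserver serving certificate; a minimal way to confirm it, reusing the flags and the openssl invocation from this run:

  out/minikube-linux-amd64 start -p cert-options-342221 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=containerd
  # the requested IPs and names should appear under Subject Alternative Name
  out/minikube-linux-amd64 -p cert-options-342221 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"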

                                                
                                    
TestCertExpiration (211.3s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-099863 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-099863 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (21.882669984s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-099863 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-099863 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.605456531s)
helpers_test.go:175: Cleaning up "cert-expiration-099863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-099863
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-099863: (2.813937306s)
--- PASS: TestCertExpiration (211.30s)
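
Most of this test's ~3.5 minute wall-clock time is the wait for the deliberately short-lived certificates to expire; the two starts from the log, condensed (the second start is expected to rotate the expired certificates rather than fail):

  # issue 3-minute certificates, then let them lapse
  out/minikube-linux-amd64 start -p cert-expiration-099863 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd
  sleep 180
  # restarting with a longer lifetime regenerates the certificates
  out/minikube-linux-amd64 start -p cert-expiration-099863 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=containerd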

                                                
                                    
TestForceSystemdFlag (31.91s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-775412 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-775412 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.158783549s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-775412 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-775412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-775412
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-775412: (2.435549997s)
--- PASS: TestForceSystemdFlag (31.91s)
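
With --force-systemd the node's containerd should be switched to the systemd cgroup driver; a quick check along the lines of the ssh step above (grepping for SystemdCgroup is an assumption about what to look for in config.toml, not something printed in this log):

  out/minikube-linux-amd64 start -p force-systemd-flag-775412 --memory=3072 --force-systemd --driver=docker --container-runtime=containerd
  out/minikube-linux-amd64 -p force-systemd-flag-775412 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup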

                                                
                                    
TestForceSystemdEnv (26.47s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-875063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-875063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.011450646s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-875063 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-875063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-875063
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-875063: (2.114975194s)
--- PASS: TestForceSystemdEnv (26.47s)

                                                
                                    
TestDockerEnvContainerd (38.85s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-464330 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-464330 --driver=docker  --container-runtime=containerd: (22.684284752s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-464330"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-464330": (1.027059982s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXvA52eX/agent.397912" SSH_AGENT_PID="397913" DOCKER_HOST=ssh://docker@127.0.0.1:33148 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXvA52eX/agent.397912" SSH_AGENT_PID="397913" DOCKER_HOST=ssh://docker@127.0.0.1:33148 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXvA52eX/agent.397912" SSH_AGENT_PID="397913" DOCKER_HOST=ssh://docker@127.0.0.1:33148 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (2.111684632s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXvA52eX/agent.397912" SSH_AGENT_PID="397913" DOCKER_HOST=ssh://docker@127.0.0.1:33148 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-464330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-464330
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-464330: (1.986561464s)
--- PASS: TestDockerEnvContainerd (38.85s)
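
docker-env with --ssh-host/--ssh-add points a local docker CLI at the daemon inside the minikube node over SSH; the sequence above boils down to roughly this (eval-ing the output is the usual way to consume docker-env):

  out/minikube-linux-amd64 start -p dockerenv-464330 --driver=docker --container-runtime=containerd
  # exports DOCKER_HOST=ssh://... and loads the node's key into an ssh-agent
  eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-464330)"
  docker version
  # builds and image listings now run against the daemon inside the node
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls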

                                                
                                    
TestErrorSpam/setup (23.74s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-352736 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-352736 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-352736 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-352736 --driver=docker  --container-runtime=containerd: (23.735091081s)
--- PASS: TestErrorSpam/setup (23.74s)

                                                
                                    
TestErrorSpam/start (0.7s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

                                                
                                    
TestErrorSpam/status (1.02s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 status
--- PASS: TestErrorSpam/status (1.02s)

                                                
                                    
TestErrorSpam/pause (1.53s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
TestErrorSpam/unpause (1.61s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

                                                
                                    
TestErrorSpam/stop (12.13s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 stop: (11.906924521s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-352736 --log_dir /tmp/nospam-352736 stop
--- PASS: TestErrorSpam/stop (12.13s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21932-370498/.minikube/files/etc/test/nested/copy/374122/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (41.72s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420317 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-420317 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (41.722307445s)
--- PASS: TestFunctional/serial/StartWithProxy (41.72s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.19s)
=== RUN   TestFunctional/serial/SoftStart
I1124 13:20:58.451114  374122 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420317 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-420317 --alsologtostderr -v=8: (6.186537902s)
functional_test.go:678: soft start took 6.187285809s for "functional-420317" cluster.
I1124 13:21:04.638013  374122 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.19s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-420317 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-420317 cache add registry.k8s.io/pause:3.3: (1.130306055s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-420317 cache add registry.k8s.io/pause:latest: (1.052183303s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.95s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-420317 /tmp/TestFunctionalserialCacheCmdcacheadd_local4253401817/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 cache add minikube-local-cache-test:functional-420317
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-420317 cache add minikube-local-cache-test:functional-420317: (1.588502669s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 cache delete minikube-local-cache-test:functional-420317
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-420317
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.95s)
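
The add_local variant takes an image from the host docker daemon and loads it into the cluster's cache; condensed from the steps above (the build context below is a placeholder for the per-test temp directory shown in the log):

  docker build -t minikube-local-cache-test:functional-420317 ./some-build-context
  out/minikube-linux-amd64 -p functional-420317 cache add minikube-local-cache-test:functional-420317
  # clean up both the cache entry and the host-side image
  out/minikube-linux-amd64 -p functional-420317 cache delete minikube-local-cache-test:functional-420317
  docker rmi minikube-local-cache-test:functional-420317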

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420317 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.942879ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
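
cache reload re-pushes every cached image into the node's runtime, which is what repairs the deliberately removed pause image above:

  # remove the image from containerd inside the node; inspecti then fails
  out/minikube-linux-amd64 -p functional-420317 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-420317 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # reload restores it from the local cache, so inspecti succeeds again
  out/minikube-linux-amd64 -p functional-420317 cache reload
  out/minikube-linux-amd64 -p functional-420317 ssh sudo crictl inspecti registry.k8s.io/pause:latest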

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 kubectl -- --context functional-420317 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-420317 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (65.22s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420317 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 13:21:16.186807  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:16.193259  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:16.204694  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:16.226099  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:16.267584  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:16.349123  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:16.510714  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:16.832444  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:17.474555  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:18.756201  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:21.319107  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:26.441437  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:36.683595  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:57.165177  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-420317 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m5.216024006s)
functional_test.go:776: restart took 1m5.216219111s for "functional-420317" cluster.
I1124 13:22:17.577822  374122 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (65.22s)
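
--extra-config forwards component flags through to kubeadm, so the restart above brings the apiserver back up with the NamespaceAutoProvision admission plugin enabled (the cert_rotation errors in between appear to come from the already-deleted addons-093377 profile and are unrelated). One way to confirm the flag landed, beyond what the test itself checks:

  out/minikube-linux-amd64 start -p functional-420317 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  # the kubeadm static pod for the apiserver carries the flag on its command line
  kubectl --context functional-420317 -n kube-system get pod -l component=kube-apiserver -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins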

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-420317 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.34s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-420317 logs: (1.335499136s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.38s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 logs --file /tmp/TestFunctionalserialLogsFileCmd3673116452/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-420317 logs --file /tmp/TestFunctionalserialLogsFileCmd3673116452/001/logs.txt: (1.38212387s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
TestFunctional/serial/InvalidService (4.17s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-420317 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-420317
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-420317: exit status 115 (368.183371ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31790 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-420317 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420317 config get cpus: exit status 14 (89.455132ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420317 config get cpus: exit status 14 (97.189245ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
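
minikube config get exits with status 14 when a key is unset, which is what the interleaved set/unset calls above assert; the same behaviour from a shell:

  out/minikube-linux-amd64 -p functional-420317 config unset cpus
  out/minikube-linux-amd64 -p functional-420317 config get cpus    # exit status 14: key not found
  out/minikube-linux-amd64 -p functional-420317 config set cpus 2
  out/minikube-linux-amd64 -p functional-420317 config get cpus    # should print 2 and exit 0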

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.17s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-420317 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-420317 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 419371: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.17s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420317 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-420317 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (185.524704ms)

                                                
                                                
-- stdout --
	* [functional-420317] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:22:39.620309  418612 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:22:39.620589  418612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:39.620600  418612 out.go:374] Setting ErrFile to fd 2...
	I1124 13:22:39.620606  418612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:39.620811  418612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:22:39.621347  418612 out.go:368] Setting JSON to false
	I1124 13:22:39.622417  418612 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7499,"bootTime":1763983061,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:22:39.622492  418612 start.go:143] virtualization: kvm guest
	I1124 13:22:39.624935  418612 out.go:179] * [functional-420317] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:22:39.626429  418612 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:22:39.626451  418612 notify.go:221] Checking for updates...
	I1124 13:22:39.629004  418612 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:22:39.630467  418612 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:22:39.631790  418612 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:22:39.636563  418612 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:22:39.638037  418612 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:22:39.639979  418612 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:22:39.640611  418612 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:22:39.664287  418612 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:22:39.664385  418612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:22:39.720716  418612 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 13:22:39.709993045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:22:39.720835  418612 docker.go:319] overlay module found
	I1124 13:22:39.722950  418612 out.go:179] * Using the docker driver based on existing profile
	I1124 13:22:39.724213  418612 start.go:309] selected driver: docker
	I1124 13:22:39.724229  418612 start.go:927] validating driver "docker" against &{Name:functional-420317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-420317 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:22:39.724333  418612 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:22:39.726396  418612 out.go:203] 
	W1124 13:22:39.728139  418612 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 13:22:39.729377  418612 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420317 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.44s)
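
Note: both outcomes above are intended. `--dry-run` still performs driver and resource validation, so the undersized memory request is rejected (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY), while the second invocation without a memory override validates cleanly against the existing profile. Sketch of the two calls from this run:

	# rejected: 250MB is below the 1800MB usable minimum
	out/minikube-linux-amd64 start -p functional-420317 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
	# accepted: no memory override, so the existing profile settings are used
	out/minikube-linux-amd64 start -p functional-420317 --dry-run --driver=docker --container-runtime=containerd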

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420317 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-420317 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (187.90736ms)

                                                
                                                
-- stdout --
	* [functional-420317] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:22:40.049963  418973 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:22:40.050097  418973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:40.050107  418973 out.go:374] Setting ErrFile to fd 2...
	I1124 13:22:40.050111  418973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:40.050440  418973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:22:40.050864  418973 out.go:368] Setting JSON to false
	I1124 13:22:40.051850  418973 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7499,"bootTime":1763983061,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:22:40.051935  418973 start.go:143] virtualization: kvm guest
	I1124 13:22:40.053948  418973 out.go:179] * [functional-420317] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 13:22:40.055568  418973 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:22:40.055591  418973 notify.go:221] Checking for updates...
	I1124 13:22:40.058343  418973 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:22:40.059674  418973 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:22:40.061039  418973 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:22:40.065525  418973 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:22:40.066998  418973 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:22:40.068863  418973 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:22:40.069422  418973 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:22:40.095115  418973 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:22:40.095244  418973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:22:40.158576  418973 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 13:22:40.148314452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:22:40.158717  418973 docker.go:319] overlay module found
	I1124 13:22:40.161081  418973 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 13:22:40.162427  418973 start.go:309] selected driver: docker
	I1124 13:22:40.162450  418973 start.go:927] validating driver "docker" against &{Name:functional-420317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-420317 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:22:40.162607  418973 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:22:40.164515  418973 out.go:203] 
	W1124 13:22:40.165938  418973 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 13:22:40.167252  418973 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 status -o json
E1124 13:22:38.127149  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (12.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-420317 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-420317 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-pjlhb" [8b87d263-01db-4983-8ac7-40ce6a0b6a93] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-pjlhb" [8b87d263-01db-4983-8ac7-40ce6a0b6a93] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003936666s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32185
functional_test.go:1680: http://192.168.49.2:32185: success! body:
Request served by hello-node-connect-7d85dfc575-pjlhb

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32185
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.57s)
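
Note: for reference, the flow this test drives is sketched below (deployment name, image and NodePort value are taken from this run; curl stands in for the Go HTTP client the test actually uses, and the assigned port will differ between runs):

	kubectl --context functional-420317 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-420317 expose deployment hello-node-connect --type=NodePort --port=8080
	# once the pod is Running, resolve the node URL and hit the echo server
	out/minikube-linux-amd64 -p functional-420317 service hello-node-connect --url   # e.g. http://192.168.49.2:32185
	curl http://192.168.49.2:32185/   # echo-server replies with the serving pod name and request headers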

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (29.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [9963f0d4-111c-46c8-bc51-78aa5af05375] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003876034s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-420317 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-420317 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-420317 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-420317 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ddd15d54-77fc-4d10-a23f-b27c04ca82f4] Pending
helpers_test.go:352: "sp-pod" [ddd15d54-77fc-4d10-a23f-b27c04ca82f4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ddd15d54-77fc-4d10-a23f-b27c04ca82f4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004385171s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-420317 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-420317 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-420317 apply -f testdata/storage-provisioner/pod.yaml
I1124 13:22:43.741474  374122 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2be5fafa-1703-41d8-9b57-c6753cc55dd1] Pending
helpers_test.go:352: "sp-pod" [2be5fafa-1703-41d8-9b57-c6753cc55dd1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2be5fafa-1703-41d8-9b57-c6753cc55dd1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004530959s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-420317 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.71s)
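
Note: the persistence check above reduces to writing through the claim from one pod, deleting that pod, and reading the same path back from a fresh pod bound to the same PVC. The commands, as run by the test:

	kubectl --context functional-420317 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-420317 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-420317 exec sp-pod -- touch /tmp/mount/foo              # write via the mounted claim
	kubectl --context functional-420317 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-420317 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
	kubectl --context functional-420317 exec sp-pod -- ls /tmp/mount                     # foo survives the pod restart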

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh -n functional-420317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 cp functional-420317:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3708621480/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh -n functional-420317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh -n functional-420317 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.98s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-420317 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-vthfh" [4e1ccce0-dac2-44f5-a35a-ac924f4d21d8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-vthfh" [4e1ccce0-dac2-44f5-a35a-ac924f4d21d8] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004547521s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-420317 exec mysql-5bb876957f-vthfh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-420317 exec mysql-5bb876957f-vthfh -- mysql -ppassword -e "show databases;": exit status 1 (115.6722ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 13:23:07.352687  374122 retry.go:31] will retry after 1.425257733s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-420317 exec mysql-5bb876957f-vthfh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-420317 exec mysql-5bb876957f-vthfh -- mysql -ppassword -e "show databases;": exit status 1 (125.922848ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 13:23:08.904882  374122 retry.go:31] will retry after 1.543523019s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-420317 exec mysql-5bb876957f-vthfh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.54s)
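
Note: the two retries above are normal — the pod reports Running before mysqld has finished initializing, so the first exec attempts fail with ERROR 2002 (no server socket yet) and the helper retries until the query succeeds. The probe, with the pod name from this run:

	kubectl --context functional-420317 exec mysql-5bb876957f-vthfh -- mysql -ppassword -e "show databases;"
	# fails with "Can't connect to local MySQL server through socket" until mysqld is up, then lists the databases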

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/374122/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo cat /etc/test/nested/copy/374122/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/374122.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo cat /etc/ssl/certs/374122.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/374122.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo cat /usr/share/ca-certificates/374122.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3741222.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo cat /etc/ssl/certs/3741222.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3741222.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo cat /usr/share/ca-certificates/3741222.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.02s)
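
Note: the files checked are the test certificates named after the integration-test process id (374122 in this run), plus hash-named entries in the system trust store, in both /etc/ssl/certs and /usr/share/ca-certificates. A manual spot check would look like:

	out/minikube-linux-amd64 -p functional-420317 ssh "sudo cat /etc/ssl/certs/374122.pem"
	out/minikube-linux-amd64 -p functional-420317 ssh "sudo cat /usr/share/ca-certificates/374122.pem"
	out/minikube-linux-amd64 -p functional-420317 ssh "sudo cat /etc/ssl/certs/51391683.0"   # subject-hash-named entry in the trust store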

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-420317 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420317 ssh "sudo systemctl is-active docker": exit status 1 (349.506852ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420317 ssh "sudo systemctl is-active crio": exit status 1 (335.747756ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
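
Note: the non-zero exits here are the assertion itself — with containerd selected as the runtime, the docker and crio units must report inactive, and `systemctl is-active` returns non-zero for an inactive unit (surfaced above as ssh status 3 and minikube exit 1). Equivalent manual check; the containerd line is an extra sanity check, not part of this test:

	out/minikube-linux-amd64 -p functional-420317 ssh "sudo systemctl is-active docker"       # inactive -> non-zero exit
	out/minikube-linux-amd64 -p functional-420317 ssh "sudo systemctl is-active crio"         # inactive -> non-zero exit
	out/minikube-linux-amd64 -p functional-420317 ssh "sudo systemctl is-active containerd"   # assumed active for this profile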

                                                
                                    
x
+
TestFunctional/parallel/License (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-420317 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-420317
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-420317
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-420317 image ls --format short --alsologtostderr:
I1124 13:22:50.839303  422054 out.go:360] Setting OutFile to fd 1 ...
I1124 13:22:50.839412  422054 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:50.839424  422054 out.go:374] Setting ErrFile to fd 2...
I1124 13:22:50.839430  422054 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:50.839645  422054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
I1124 13:22:50.840287  422054 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:22:50.840397  422054 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:22:50.840838  422054 cli_runner.go:164] Run: docker container inspect functional-420317 --format={{.State.Status}}
I1124 13:22:50.859143  422054 ssh_runner.go:195] Run: systemctl --version
I1124 13:22:50.859270  422054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-420317
I1124 13:22:50.877986  422054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/functional-420317/id_rsa Username:docker}
I1124 13:22:50.983549  422054 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-420317 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                     │ latest             │ sha256:60adc2 │ 59.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/minikube-local-cache-test │ functional-420317  │ sha256:a18f09 │ 990B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server               │ functional-420317  │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:9056ab │ 2.37MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-420317 image ls --format table --alsologtostderr:
I1124 13:22:51.664050  422384 out.go:360] Setting OutFile to fd 1 ...
I1124 13:22:51.664303  422384 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:51.664314  422384 out.go:374] Setting ErrFile to fd 2...
I1124 13:22:51.664319  422384 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:51.664513  422384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
I1124 13:22:51.665138  422384 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:22:51.665246  422384 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:22:51.665840  422384 cli_runner.go:164] Run: docker container inspect functional-420317 --format={{.State.Status}}
I1124 13:22:51.683419  422384 ssh_runner.go:195] Run: systemctl --version
I1124 13:22:51.683469  422384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-420317
I1124 13:22:51.702329  422384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/functional-420317/id_rsa Username:docker}
I1124 13:22:51.804110  422384 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-420317 image ls --format json --alsologtostderr:
[{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDige
sts":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:l
atest"],"size":"72306"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-420317","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe00
77a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:a18f09b9edabb38066a9766300db25044c69ac1181543c5eb8489681ba4266bd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-420317"],"size":"990"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908e
b8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"59772801"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-420317 image ls --format json --alsologtostderr:
I1124 13:22:51.415155  422244 out.go:360] Setting OutFile to fd 1 ...
I1124 13:22:51.415270  422244 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:51.415278  422244 out.go:374] Setting ErrFile to fd 2...
I1124 13:22:51.415282  422244 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:51.415481  422244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
I1124 13:22:51.416098  422244 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:22:51.416200  422244 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:22:51.416610  422244 cli_runner.go:164] Run: docker container inspect functional-420317 --format={{.State.Status}}
I1124 13:22:51.436753  422244 ssh_runner.go:195] Run: systemctl --version
I1124 13:22:51.436815  422244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-420317
I1124 13:22:51.456594  422244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/functional-420317/id_rsa Username:docker}
I1124 13:22:51.559624  422244 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
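
Note: the ImageList variants differ only in the --format flag; as the stderr traces show, each one shells into the node, reads `sudo crictl images --output json`, and formats the result. Equivalent invocations:

	out/minikube-linux-amd64 -p functional-420317 image ls --format short   # one image reference per line
	out/minikube-linux-amd64 -p functional-420317 image ls --format table   # boxed table: image, tag, id, size
	out/minikube-linux-amd64 -p functional-420317 image ls --format json
	out/minikube-linux-amd64 -p functional-420317 image ls --format yaml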

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image ls --format yaml --alsologtostderr
2025/11/24 13:22:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-420317 image ls --format yaml --alsologtostderr:
- id: sha256:a18f09b9edabb38066a9766300db25044c69ac1181543c5eb8489681ba4266bd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-420317
size: "990"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-420317
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
- id: sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "59772801"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-420317 image ls --format yaml --alsologtostderr:
I1124 13:22:51.081753  422165 out.go:360] Setting OutFile to fd 1 ...
I1124 13:22:51.082262  422165 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:51.082273  422165 out.go:374] Setting ErrFile to fd 2...
I1124 13:22:51.082279  422165 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:51.082517  422165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
I1124 13:22:51.083160  422165 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:22:51.083293  422165 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:22:51.083757  422165 cli_runner.go:164] Run: docker container inspect functional-420317 --format={{.State.Status}}
I1124 13:22:51.102389  422165 ssh_runner.go:195] Run: systemctl --version
I1124 13:22:51.102444  422165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-420317
I1124 13:22:51.121725  422165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/functional-420317/id_rsa Username:docker}
I1124 13:22:51.226612  422165 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420317 ssh pgrep buildkitd: exit status 1 (292.763363ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image build -t localhost/my-image:functional-420317 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-420317 image build -t localhost/my-image:functional-420317 testdata/build --alsologtostderr: (4.489992552s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-420317 image build -t localhost/my-image:functional-420317 testdata/build --alsologtostderr:
I1124 13:22:51.619642  422363 out.go:360] Setting OutFile to fd 1 ...
I1124 13:22:51.619944  422363 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:51.619957  422363 out.go:374] Setting ErrFile to fd 2...
I1124 13:22:51.619961  422363 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:51.620200  422363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
I1124 13:22:51.620830  422363 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:22:51.621471  422363 config.go:182] Loaded profile config "functional-420317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:22:51.622021  422363 cli_runner.go:164] Run: docker container inspect functional-420317 --format={{.State.Status}}
I1124 13:22:51.644492  422363 ssh_runner.go:195] Run: systemctl --version
I1124 13:22:51.644556  422363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-420317
I1124 13:22:51.665403  422363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/functional-420317/id_rsa Username:docker}
I1124 13:22:51.767737  422363 build_images.go:162] Building image from path: /tmp/build.2941701193.tar
I1124 13:22:51.767796  422363 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 13:22:51.776422  422363 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2941701193.tar
I1124 13:22:51.780416  422363 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2941701193.tar: stat -c "%s %y" /var/lib/minikube/build/build.2941701193.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2941701193.tar': No such file or directory
I1124 13:22:51.780450  422363 ssh_runner.go:362] scp /tmp/build.2941701193.tar --> /var/lib/minikube/build/build.2941701193.tar (3072 bytes)
I1124 13:22:51.801234  422363 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2941701193
I1124 13:22:51.810812  422363 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2941701193 -xf /var/lib/minikube/build/build.2941701193.tar
I1124 13:22:51.820698  422363 containerd.go:394] Building image: /var/lib/minikube/build/build.2941701193
I1124 13:22:51.820800  422363 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2941701193 --local dockerfile=/var/lib/minikube/build/build.2941701193 --output type=image,name=localhost/my-image:functional-420317
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.2s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:bf9e06d90fd9b3c7b23eaad719e17b71410cb8267311fdcf97c9c3ac2dc40847 0.0s done
#8 exporting config sha256:a4ced390c38024b16a106689b612235ce21f1819ee6d79ed91e8bfba4da36151 done
#8 naming to localhost/my-image:functional-420317
#8 naming to localhost/my-image:functional-420317 done
#8 DONE 0.1s
I1124 13:22:56.014855  422363 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2941701193 --local dockerfile=/var/lib/minikube/build/build.2941701193 --output type=image,name=localhost/my-image:functional-420317: (4.194006281s)
I1124 13:22:56.014969  422363 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2941701193
I1124 13:22:56.025707  422363 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2941701193.tar
I1124 13:22:56.036615  422363 build_images.go:218] Built localhost/my-image:functional-420317 from /tmp/build.2941701193.tar
I1124 13:22:56.036662  422363 build_images.go:134] succeeded building to: functional-420317
I1124 13:22:56.036669  422363 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.06s)
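
The buildkit steps above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply a build context along these lines; this is a sketch reconstructed from the log, not the checked-in testdata/build directory, and the content.txt payload is a placeholder:

# Recreate an equivalent build context in a scratch directory.
mkdir -p /tmp/minikube-build-demo
printf 'placeholder payload\n' > /tmp/minikube-build-demo/content.txt
cat > /tmp/minikube-build-demo/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
# Run the same in-cluster build the test performs, then confirm the image exists.
out/minikube-linux-amd64 -p functional-420317 image build -t localhost/my-image:functional-420317 /tmp/minikube-build-demo --alsologtostderr
out/minikube-linux-amd64 -p functional-420317 image ls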

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.732418138s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-420317
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-420317 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-420317 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-xcw4c" [8f319ad0-3c0c-4697-9d89-b223d6f8e61a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-xcw4c" [8f319ad0-3c0c-4697-9d89-b223d6f8e61a] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003515457s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-420317 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-420317 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-420317 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-420317 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 415200: os: process already finished
helpers_test.go:525: unable to kill pid 414893: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image load --daemon kicbase/echo-server:functional-420317 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-420317 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-420317 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [549757d7-e535-4723-84e8-64a63421bc71] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [549757d7-e535-4723-84e8-64a63421bc71] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004142701s
I1124 13:22:38.345145  374122 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image load --daemon kicbase/echo-server:functional-420317 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-420317
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image load --daemon kicbase/echo-server:functional-420317 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image save kicbase/echo-server:functional-420317 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image rm kicbase/echo-server:functional-420317 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-420317
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 image save --daemon kicbase/echo-server:functional-420317 --alsologtostderr
I1124 13:22:32.601331  374122 detect.go:223] nested VM detected
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-420317
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
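
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise one tarball round trip; condensed into plain commands (the /tmp path stands in for the Jenkins workspace path used above), the sequence they cover is roughly:

# Save the tagged image from the cluster's containerd store to a tarball on the host.
out/minikube-linux-amd64 -p functional-420317 image save kicbase/echo-server:functional-420317 /tmp/echo-server-save.tar --alsologtostderr
# Remove it from the cluster, then load it back from the tarball.
out/minikube-linux-amd64 -p functional-420317 image rm kicbase/echo-server:functional-420317 --alsologtostderr
out/minikube-linux-amd64 -p functional-420317 image load /tmp/echo-server-save.tar --alsologtostderr
# Alternatively, export it straight into the host Docker daemon and confirm it arrived.
out/minikube-linux-amd64 -p functional-420317 image save --daemon kicbase/echo-server:functional-420317 --alsologtostderr
docker image inspect kicbase/echo-server:functional-420317
out/minikube-linux-amd64 -p functional-420317 image ls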

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 service list -o json
functional_test.go:1504: Took "538.494035ms" to run "out/minikube-linux-amd64 -p functional-420317 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31260
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31260
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
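
The ServiceCmd subtests above all resolve the same NodePort endpoint; the manual equivalent of what they check, using the commands from the log plus an illustrative curl (port 31260 is specific to this run), is roughly:

kubectl --context functional-420317 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-420317 expose deployment hello-node --type=NodePort --port=8080
# Ask minikube for the service URL, then hit it directly from the host.
out/minikube-linux-amd64 -p functional-420317 service hello-node --url
curl -s http://192.168.49.2:31260/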

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-420317 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.64.101 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
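
WaitService and AccessDirect verify the LoadBalancer path that minikube tunnel provides: while the tunnel runs, nginx-svc receives an ingress IP (10.98.64.101 in this run) that is reachable straight from the host. The manual equivalent, with curl standing in for the test's HTTP check, is roughly:

# Terminal 1: keep the tunnel running so LoadBalancer services receive an ingress IP.
out/minikube-linux-amd64 -p functional-420317 tunnel --alsologtostderr
# Terminal 2: read the assigned IP and request the service directly from the host.
kubectl --context functional-420317 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -s http://10.98.64.101/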

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-420317 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdany-port2632013050/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763990558519683508" to /tmp/TestFunctionalparallelMountCmdany-port2632013050/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763990558519683508" to /tmp/TestFunctionalparallelMountCmdany-port2632013050/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763990558519683508" to /tmp/TestFunctionalparallelMountCmdany-port2632013050/001/test-1763990558519683508
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (327.41004ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 13:22:38.847536  374122 retry.go:31] will retry after 257.096979ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 13:22 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 13:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 13:22 test-1763990558519683508
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh cat /mount-9p/test-1763990558519683508
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-420317 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [497330d4-8a26-4709-b948-4a53b5b05810] Pending
helpers_test.go:352: "busybox-mount" [497330d4-8a26-4709-b948-4a53b5b05810] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [497330d4-8a26-4709-b948-4a53b5b05810] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [497330d4-8a26-4709-b948-4a53b5b05810] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003880754s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-420317 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdany-port2632013050/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.83s)
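
MountCmd/any-port drives a host-to-guest 9p mount end to end; stripped of the test plumbing, the steps it performs (with /tmp/mount-demo as a placeholder for the per-test temp directory) are roughly:

# Terminal 1: mount a host directory into the node over 9p on an ephemeral port.
out/minikube-linux-amd64 mount -p functional-420317 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1
# Terminal 2: confirm the guest sees a 9p filesystem and the files written on the host.
out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-420317 ssh -- ls -la /mount-9p
# Tear the mount down (the test tolerates it already being gone).
out/minikube-linux-amd64 -p functional-420317 ssh "sudo umount -f /mount-9p"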

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "376.933238ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.429966ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "384.131593ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "74.423784ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdspecific-port3709693466/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (344.205798ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 13:22:46.690265  374122 retry.go:31] will retry after 547.636967ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdspecific-port3709693466/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420317 ssh "sudo umount -f /mount-9p": exit status 1 (299.118527ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-420317 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdspecific-port3709693466/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.05s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3474468665/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3474468665/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3474468665/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T" /mount1: exit status 1 (370.073982ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 13:22:48.764054  374122 retry.go:31] will retry after 455.616799ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-420317 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-420317 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3474468665/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3474468665/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3474468665/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-420317
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-420317
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-420317
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (114.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1124 13:24:00.049402  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m53.417415885s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (114.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 kubectl -- rollout status deployment/busybox: (3.536035103s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-bj5wl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-qmmhg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-zgxvk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-bj5wl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-qmmhg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-zgxvk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-bj5wl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-qmmhg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-zgxvk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-bj5wl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-bj5wl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-qmmhg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-qmmhg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-zgxvk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 kubectl -- exec busybox-7b57f96db7-zgxvk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)
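
DeployApp and PingHostFromPods repeat the same three probes from each busybox replica: cluster DNS, resolution of host.minikube.internal, and a ping to the docker network gateway. Without the profile wrapper, and with the pod name and gateway IP taken from this run, the probes are roughly:

kubectl --context ha-123336 exec busybox-7b57f96db7-bj5wl -- nslookup kubernetes.default.svc.cluster.local
kubectl --context ha-123336 exec busybox-7b57f96db7-bj5wl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
kubectl --context ha-123336 exec busybox-7b57f96db7-bj5wl -- sh -c "ping -c 1 192.168.49.1"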

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (27.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 node add --alsologtostderr -v 5: (26.326841593s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.26s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-123336 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (18.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp testdata/cp-test.txt ha-123336:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2496057007/001/cp-test_ha-123336.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336:/home/docker/cp-test.txt ha-123336-m02:/home/docker/cp-test_ha-123336_ha-123336-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m02 "sudo cat /home/docker/cp-test_ha-123336_ha-123336-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336:/home/docker/cp-test.txt ha-123336-m03:/home/docker/cp-test_ha-123336_ha-123336-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m03 "sudo cat /home/docker/cp-test_ha-123336_ha-123336-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336:/home/docker/cp-test.txt ha-123336-m04:/home/docker/cp-test_ha-123336_ha-123336-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m04 "sudo cat /home/docker/cp-test_ha-123336_ha-123336-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp testdata/cp-test.txt ha-123336-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2496057007/001/cp-test_ha-123336-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m02:/home/docker/cp-test.txt ha-123336:/home/docker/cp-test_ha-123336-m02_ha-123336.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336 "sudo cat /home/docker/cp-test_ha-123336-m02_ha-123336.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m02:/home/docker/cp-test.txt ha-123336-m03:/home/docker/cp-test_ha-123336-m02_ha-123336-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m03 "sudo cat /home/docker/cp-test_ha-123336-m02_ha-123336-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m02:/home/docker/cp-test.txt ha-123336-m04:/home/docker/cp-test_ha-123336-m02_ha-123336-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m04 "sudo cat /home/docker/cp-test_ha-123336-m02_ha-123336-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp testdata/cp-test.txt ha-123336-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2496057007/001/cp-test_ha-123336-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m03:/home/docker/cp-test.txt ha-123336:/home/docker/cp-test_ha-123336-m03_ha-123336.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336 "sudo cat /home/docker/cp-test_ha-123336-m03_ha-123336.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m03:/home/docker/cp-test.txt ha-123336-m02:/home/docker/cp-test_ha-123336-m03_ha-123336-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m02 "sudo cat /home/docker/cp-test_ha-123336-m03_ha-123336-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m03:/home/docker/cp-test.txt ha-123336-m04:/home/docker/cp-test_ha-123336-m03_ha-123336-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m04 "sudo cat /home/docker/cp-test_ha-123336-m03_ha-123336-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp testdata/cp-test.txt ha-123336-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2496057007/001/cp-test_ha-123336-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m04:/home/docker/cp-test.txt ha-123336:/home/docker/cp-test_ha-123336-m04_ha-123336.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336 "sudo cat /home/docker/cp-test_ha-123336-m04_ha-123336.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m04:/home/docker/cp-test.txt ha-123336-m02:/home/docker/cp-test_ha-123336-m04_ha-123336-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m02 "sudo cat /home/docker/cp-test_ha-123336-m04_ha-123336-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 cp ha-123336-m04:/home/docker/cp-test.txt ha-123336-m03:/home/docker/cp-test_ha-123336-m04_ha-123336-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 ssh -n ha-123336-m03 "sudo cat /home/docker/cp-test_ha-123336-m04_ha-123336-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.40s)
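
The block above is the cp/ssh round-trip pattern used throughout this suite: copy a file with `minikube cp`, then read it back on the target node with `minikube ssh -n <node> "sudo cat ..."`. Below is a minimal, hypothetical stand-alone version of that round-trip in Go; the profile and node names are taken from this run and would need to match an existing cluster.

// copyfile_sketch.go: a minimal stand-in for the cp/ssh round-trip exercised
// by TestMultiControlPlane/serial/CopyFile. Assumes a running profile
// "ha-123336" with a node "ha-123336-m02"; adjust names for your own cluster.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Stage a local file, push it onto the m02 node, then cat it back over ssh.
	const want = "hello from the host\n"
	src := filepath.Join(os.TempDir(), "cp-test.txt")
	if err := os.WriteFile(src, []byte(want), 0o644); err != nil {
		log.Fatal(err)
	}
	mk("-p", "ha-123336", "cp", src, "ha-123336-m02:/home/docker/cp-test.txt")
	got := mk("-p", "ha-123336", "ssh", "-n", "ha-123336-m02", "sudo cat /home/docker/cp-test.txt")
	if strings.TrimSpace(got) != strings.TrimSpace(want) {
		log.Fatalf("round-trip mismatch: got %q, want %q", got, want)
	}
	log.Println("cp/ssh round-trip OK")
}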

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 node stop m02 --alsologtostderr -v 5: (12.072416476s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-123336 status --alsologtostderr -v 5: exit status 7 (731.836623ms)

                                                
                                                
-- stdout --
	ha-123336
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-123336-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-123336-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-123336-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:26:14.078870  443889 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:26:14.079014  443889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:26:14.079027  443889 out.go:374] Setting ErrFile to fd 2...
	I1124 13:26:14.079033  443889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:26:14.079257  443889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:26:14.079436  443889 out.go:368] Setting JSON to false
	I1124 13:26:14.079474  443889 mustload.go:66] Loading cluster: ha-123336
	I1124 13:26:14.079699  443889 notify.go:221] Checking for updates...
	I1124 13:26:14.079840  443889 config.go:182] Loaded profile config "ha-123336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:26:14.079861  443889 status.go:174] checking status of ha-123336 ...
	I1124 13:26:14.080382  443889 cli_runner.go:164] Run: docker container inspect ha-123336 --format={{.State.Status}}
	I1124 13:26:14.100029  443889 status.go:371] ha-123336 host status = "Running" (err=<nil>)
	I1124 13:26:14.100097  443889 host.go:66] Checking if "ha-123336" exists ...
	I1124 13:26:14.100446  443889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-123336
	I1124 13:26:14.121016  443889 host.go:66] Checking if "ha-123336" exists ...
	I1124 13:26:14.121370  443889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:26:14.121423  443889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-123336
	I1124 13:26:14.140953  443889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/ha-123336/id_rsa Username:docker}
	I1124 13:26:14.242878  443889 ssh_runner.go:195] Run: systemctl --version
	I1124 13:26:14.249575  443889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:26:14.263092  443889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:26:14.321295  443889 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 13:26:14.310932558 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:26:14.321828  443889 kubeconfig.go:125] found "ha-123336" server: "https://192.168.49.254:8443"
	I1124 13:26:14.321859  443889 api_server.go:166] Checking apiserver status ...
	I1124 13:26:14.321893  443889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:26:14.335282  443889 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1357/cgroup
	W1124 13:26:14.344162  443889 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1357/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:26:14.344246  443889 ssh_runner.go:195] Run: ls
	I1124 13:26:14.348189  443889 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 13:26:14.352469  443889 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 13:26:14.352498  443889 status.go:463] ha-123336 apiserver status = Running (err=<nil>)
	I1124 13:26:14.352509  443889 status.go:176] ha-123336 status: &{Name:ha-123336 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:26:14.352532  443889 status.go:174] checking status of ha-123336-m02 ...
	I1124 13:26:14.352793  443889 cli_runner.go:164] Run: docker container inspect ha-123336-m02 --format={{.State.Status}}
	I1124 13:26:14.370980  443889 status.go:371] ha-123336-m02 host status = "Stopped" (err=<nil>)
	I1124 13:26:14.371007  443889 status.go:384] host is not running, skipping remaining checks
	I1124 13:26:14.371016  443889 status.go:176] ha-123336-m02 status: &{Name:ha-123336-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:26:14.371044  443889 status.go:174] checking status of ha-123336-m03 ...
	I1124 13:26:14.371350  443889 cli_runner.go:164] Run: docker container inspect ha-123336-m03 --format={{.State.Status}}
	I1124 13:26:14.389714  443889 status.go:371] ha-123336-m03 host status = "Running" (err=<nil>)
	I1124 13:26:14.389740  443889 host.go:66] Checking if "ha-123336-m03" exists ...
	I1124 13:26:14.390105  443889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-123336-m03
	I1124 13:26:14.408124  443889 host.go:66] Checking if "ha-123336-m03" exists ...
	I1124 13:26:14.408398  443889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:26:14.408466  443889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-123336-m03
	I1124 13:26:14.426811  443889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/ha-123336-m03/id_rsa Username:docker}
	I1124 13:26:14.528682  443889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:26:14.542283  443889 kubeconfig.go:125] found "ha-123336" server: "https://192.168.49.254:8443"
	I1124 13:26:14.542311  443889 api_server.go:166] Checking apiserver status ...
	I1124 13:26:14.542353  443889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:26:14.553990  443889 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1313/cgroup
	W1124 13:26:14.562849  443889 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1313/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:26:14.562935  443889 ssh_runner.go:195] Run: ls
	I1124 13:26:14.566985  443889 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 13:26:14.571161  443889 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 13:26:14.571191  443889 status.go:463] ha-123336-m03 apiserver status = Running (err=<nil>)
	I1124 13:26:14.571199  443889 status.go:176] ha-123336-m03 status: &{Name:ha-123336-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:26:14.571215  443889 status.go:174] checking status of ha-123336-m04 ...
	I1124 13:26:14.571490  443889 cli_runner.go:164] Run: docker container inspect ha-123336-m04 --format={{.State.Status}}
	I1124 13:26:14.590278  443889 status.go:371] ha-123336-m04 host status = "Running" (err=<nil>)
	I1124 13:26:14.590304  443889 host.go:66] Checking if "ha-123336-m04" exists ...
	I1124 13:26:14.590587  443889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-123336-m04
	I1124 13:26:14.608260  443889 host.go:66] Checking if "ha-123336-m04" exists ...
	I1124 13:26:14.608517  443889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:26:14.608560  443889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-123336-m04
	I1124 13:26:14.630386  443889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/ha-123336-m04/id_rsa Username:docker}
	I1124 13:26:14.731513  443889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:26:14.745489  443889 status.go:176] ha-123336-m04 status: &{Name:ha-123336-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.81s)
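
The non-zero exit above is expected: with m02 stopped, `minikube status` still prints the per-node breakdown but signals the degraded state through its exit code (7 in this run). A small sketch of consuming that behavior programmatically, assuming any non-zero code should be inspected rather than treated as a failed invocation:

// status_sketch.go: run "minikube status" and surface the exit code instead of
// treating any non-zero result as a hard failure, mirroring how the test above
// tolerates exit status 7 while a node is deliberately stopped.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-123336", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero (7 in the run above) means at least one node or component
		// is not running; the per-node breakdown is still printed on stdout.
		code = exitErr.ExitCode()
	} else if err != nil {
		log.Fatal(err) // the binary could not be run at all
	}
	fmt.Printf("status exit code: %d\n%s", code, out)
}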

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (9.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 node start m02 --alsologtostderr -v 5
E1124 13:26:16.186965  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 node start m02 --alsologtostderr -v 5: (8.104317124s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 stop --alsologtostderr -v 5
E1124 13:26:43.894157  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 stop --alsologtostderr -v 5: (37.347320896s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 start --wait true --alsologtostderr -v 5
E1124 13:27:25.849793  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:25.856237  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:25.867753  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:25.889289  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:25.930825  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:26.012367  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:26.173896  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:26.495833  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:27.137501  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:28.419183  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:30.981146  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:36.103270  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:46.345481  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 start --wait true --alsologtostderr -v 5: (58.656121884s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.15s)
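
The assertion behind this test is that the set of nodes survives a full `stop` / `start --wait true` cycle. A simplified sketch of the same before/after comparison (profile name reused from this run; the comparison here is byte-for-byte, which may be stricter than the test's own check):

// restart_sketch.go: capture "node list" before and after a full stop/start
// cycle and require the output to match. This is a strict byte-for-byte
// comparison; the actual test's assertion may be looser.
package main

import (
	"log"
	"os/exec"
)

func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "ha-123336"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	before := mk("node", "list")
	mk("stop")
	mk("start", "--wait", "true")
	after := mk("node", "list")
	if before != after {
		log.Fatalf("node list changed across restart:\nbefore:\n%safter:\n%s", before, after)
	}
	log.Println("node list preserved across restart")
}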

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 node delete m03 --alsologtostderr -v 5
E1124 13:28:06.826965  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 node delete m03 --alsologtostderr -v 5: (8.715287977s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.57s)
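
The final kubectl call above flattens each node's Ready condition to a bare True/False via a go-template. The sketch below re-runs that template and fails if any remaining node is not Ready; the wrapping single quotes from the shell form are dropped because the template is passed as a single argument:

// ready_sketch.go: re-run the Ready-condition go-template from the kubectl
// call above and fail unless every node reports "True".
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v\n%s", err, out)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if s := strings.TrimSpace(line); s != "" && s != "True" {
			log.Fatalf("found a node that is not Ready: %q", s)
		}
	}
	log.Println("all nodes Ready")
}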

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 stop --alsologtostderr -v 5
E1124 13:28:47.789200  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 stop --alsologtostderr -v 5: (36.143766619s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-123336 status --alsologtostderr -v 5: exit status 7 (122.532931ms)

                                                
                                                
-- stdout --
	ha-123336
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-123336-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-123336-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:28:48.220957  460179 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:28:48.221223  460179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:28:48.221233  460179 out.go:374] Setting ErrFile to fd 2...
	I1124 13:28:48.221236  460179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:28:48.221882  460179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:28:48.222523  460179 out.go:368] Setting JSON to false
	I1124 13:28:48.222574  460179 mustload.go:66] Loading cluster: ha-123336
	I1124 13:28:48.222685  460179 notify.go:221] Checking for updates...
	I1124 13:28:48.223329  460179 config.go:182] Loaded profile config "ha-123336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:28:48.223357  460179 status.go:174] checking status of ha-123336 ...
	I1124 13:28:48.223797  460179 cli_runner.go:164] Run: docker container inspect ha-123336 --format={{.State.Status}}
	I1124 13:28:48.242456  460179 status.go:371] ha-123336 host status = "Stopped" (err=<nil>)
	I1124 13:28:48.242481  460179 status.go:384] host is not running, skipping remaining checks
	I1124 13:28:48.242495  460179 status.go:176] ha-123336 status: &{Name:ha-123336 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:28:48.242565  460179 status.go:174] checking status of ha-123336-m02 ...
	I1124 13:28:48.242962  460179 cli_runner.go:164] Run: docker container inspect ha-123336-m02 --format={{.State.Status}}
	I1124 13:28:48.260725  460179 status.go:371] ha-123336-m02 host status = "Stopped" (err=<nil>)
	I1124 13:28:48.260753  460179 status.go:384] host is not running, skipping remaining checks
	I1124 13:28:48.260762  460179 status.go:176] ha-123336-m02 status: &{Name:ha-123336-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:28:48.260805  460179 status.go:174] checking status of ha-123336-m04 ...
	I1124 13:28:48.261147  460179 cli_runner.go:164] Run: docker container inspect ha-123336-m04 --format={{.State.Status}}
	I1124 13:28:48.279848  460179 status.go:371] ha-123336-m04 host status = "Stopped" (err=<nil>)
	I1124 13:28:48.279871  460179 status.go:384] host is not running, skipping remaining checks
	I1124 13:28:48.279879  460179 status.go:176] ha-123336-m04 status: &{Name:ha-123336-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.27s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (55.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (54.402833868s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.32s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (46.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 node add --control-plane --alsologtostderr -v 5
E1124 13:30:09.712242  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-123336 node add --control-plane --alsologtostderr -v 5: (45.385072679s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-123336 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.34s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

                                                
                                    
TestJSONOutput/start/Command (41.21s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-566830 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1124 13:31:16.186539  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-566830 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (41.210479975s)
--- PASS: TestJSONOutput/start/Command (41.21s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-566830 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-566830 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.89s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-566830 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-566830 --output=json --user=testUser: (5.890261955s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-013127 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-013127 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (88.408542ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c67ef69b-05bc-4178-8155-a7cfa2ba0ef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-013127] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ab8a87f-0058-4a18-b1ba-8e33f835177d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21932"}}
	{"specversion":"1.0","id":"08d9359f-5ae8-4850-b5b8-301e40a3d33f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d4391705-b18d-4cc0-b042-7c0f906e5ee8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig"}}
	{"specversion":"1.0","id":"2abd108b-feb1-4bd0-b22b-0826a5631a79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube"}}
	{"specversion":"1.0","id":"da704876-046a-4100-baa4-4c9fbc6157ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7052f37b-5223-4679-bffe-1a5c3f8b0ca4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2464435f-73f3-4050-abfd-928c29f66fd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-013127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-013127
--- PASS: TestErrorJSONOutput (0.25s)
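
Each line printed under `--output=json` is a self-contained CloudEvents-style record with the fields visible in the stdout above (specversion, id, source, type, data). A minimal decoder sketch follows; the struct shape is inferred from this output rather than taken from a published schema:

// events_sketch.go: decode the line-delimited JSON that "minikube ... --output=json"
// emits and report step progress and errors. Field names are taken from the
// events shown above; treat this as an informal schema.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe minikube's stdout into this program, e.g.:
	//   out/minikube-linux-amd64 start -p demo --output=json | go run events_sketch.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON noise on stdout
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}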

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.09s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-925417 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-925417 --network=: (35.907074368s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-925417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-925417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-925417: (2.161871629s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.09s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.93s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-948931 --network=bridge
E1124 13:32:25.854095  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-948931 --network=bridge: (20.870968093s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-948931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-948931
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-948931: (2.041511634s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.93s)

                                                
                                    
TestKicExistingNetwork (23.67s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1124 13:32:33.797679  374122 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 13:32:33.814647  374122 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 13:32:33.814714  374122 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 13:32:33.814742  374122 cli_runner.go:164] Run: docker network inspect existing-network
W1124 13:32:33.831181  374122 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 13:32:33.831215  374122 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1124 13:32:33.831230  374122 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1124 13:32:33.831343  374122 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 13:32:33.849002  374122 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8afb578efdfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:5e:46:43:aa:fe} reservation:<nil>}
I1124 13:32:33.849407  374122 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021aafb0}
I1124 13:32:33.849433  374122 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 13:32:33.849502  374122 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 13:32:33.899765  374122 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-794138 --network=existing-network
E1124 13:32:53.560764  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-794138 --network=existing-network: (21.492132316s)
helpers_test.go:175: Cleaning up "existing-network-794138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-794138
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-794138: (2.040509817s)
I1124 13:32:57.453999  374122 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.67s)
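
The setup here is the interesting part: the test pre-creates a labelled bridge network with the same options minikube itself uses, then starts a profile with `--network=existing-network` so minikube adopts it instead of allocating a new subnet. A sketch of that setup step, with the 192.168.58.0/24 range hard-coded from this run (the test picks the first free private /24):

// existing_network_sketch.go: pre-create a labelled bridge network the way
// TestKicExistingNetwork does, then hand it to minikube via --network.
// The subnet is hard-coded here; the test chooses the first free private /24.
package main

import (
	"log"
	"os/exec"
)

func must(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	must("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	must("out/minikube-linux-amd64", "start", "-p", "existing-network-794138", "--network=existing-network")
	log.Println("profile started on pre-existing docker network")
}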

                                                
                                    
TestKicCustomSubnet (27.55s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-323762 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-323762 --subnet=192.168.60.0/24: (25.339890251s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-323762 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-323762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-323762
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-323762: (2.193420413s)
--- PASS: TestKicCustomSubnet (27.55s)
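
Verification here is a single `docker network inspect` with a Go template over the network minikube creates for the profile. A sketch of the same start-then-inspect check, reusing the profile name and subnet from this run:

// custom_subnet_sketch.go: start a profile on a requested subnet and confirm
// docker reports that subnet back, as TestKicCustomSubnet does.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile, subnet = "custom-subnet-323762", "192.168.60.0/24"
	if out, err := exec.Command("out/minikube-linux-amd64", "start", "-p", profile, "--subnet="+subnet).CombinedOutput(); err != nil {
		log.Fatalf("start: %v\n%s", err, out)
	}
	out, err := exec.Command("docker", "network", "inspect", profile,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").CombinedOutput()
	if err != nil {
		log.Fatalf("inspect: %v\n%s", err, out)
	}
	if got := strings.TrimSpace(string(out)); got != subnet {
		log.Fatalf("network subnet = %q, want %q", got, subnet)
	}
	log.Println("subnet verified:", subnet)
}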

                                                
                                    
TestKicStaticIP (24.2s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-826058 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-826058 --static-ip=192.168.200.200: (21.871839486s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-826058 ip
helpers_test.go:175: Cleaning up "static-ip-826058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-826058
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-826058: (2.169735115s)
--- PASS: TestKicStaticIP (24.20s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (54.01s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-340635 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-340635 --driver=docker  --container-runtime=containerd: (24.095731652s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-342798 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-342798 --driver=docker  --container-runtime=containerd: (23.7967705s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-340635
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-342798
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-342798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-342798
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-342798: (2.386057468s)
helpers_test.go:175: Cleaning up "first-340635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-340635
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-340635: (2.431681953s)
--- PASS: TestMinikubeProfile (54.01s)
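
The profile test exercises two things: `minikube profile <name>` to switch the active profile and `profile list -ojson` to read the registry back. The sketch below checks that both created profiles appear in the JSON listing; it searches the raw output for the names rather than assuming a particular schema:

// profiles_sketch.go: switch between two existing profiles and confirm both
// appear in "profile list -ojson". Profile names are from this run; the JSON
// is searched as text to avoid depending on its exact structure.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	for _, p := range []string{"first-340635", "second-342798"} {
		mk("profile", p) // make p the active profile
		listing := mk("profile", "list", "-ojson")
		if !strings.Contains(listing, p) {
			log.Fatalf("profile %q missing from listing:\n%s", p, listing)
		}
	}
	log.Println("both profiles present in listing")
}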

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.53s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-014540 --memory=3072 --mount-string /tmp/TestMountStartserial1650576596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-014540 --memory=3072 --mount-string /tmp/TestMountStartserial1650576596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.533094498s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.53s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-014540 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
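
Together, the two tests above confirm that the host directory passed via `--mount-string` is visible inside the node at the guest path. A sketch of that start-and-verify sequence with the same flags; the profile name and host directory below are illustrative placeholders (the host directory must already exist):

// mount_sketch.go: start a no-kubernetes node with a host directory mount and
// list the mounted path over ssh, as the MountStart tests above do. Profile
// name and host directory are placeholders, not values from this run.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	const profile = "mount-start-demo"
	mk("start", "-p", profile, "--memory=3072",
		"--mount-string", "/tmp/mount-demo:/minikube-host",
		"--mount-gid", "0", "--mount-msize", "6543",
		"--mount-port", "46464", "--mount-uid", "0",
		"--no-kubernetes", "--driver=docker", "--container-runtime=containerd")
	// If the mount is live, this lists the host directory's contents.
	fmt.Print(mk("-p", profile, "ssh", "--", "ls", "/minikube-host"))
}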

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.69s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-031148 --memory=3072 --mount-string /tmp/TestMountStartserial1650576596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-031148 --memory=3072 --mount-string /tmp/TestMountStartserial1650576596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.68514984s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.69s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-031148 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-014540 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-014540 --alsologtostderr -v=5: (1.711266464s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-031148 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-031148
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-031148: (1.27300786s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (7.82s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-031148
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-031148: (6.820291695s)
--- PASS: TestMountStart/serial/RestartStopped (7.82s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-031148 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (64.06s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-754371 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-754371 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m3.552402183s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 status --alsologtostderr
E1124 13:36:16.187066  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.06s)
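The equivalent manual invocation is sketched below; "mn-demo" is a placeholder profile name and "minikube" stands for the out/minikube-linux-amd64 binary from this run.

# Bring up a two-node cluster and wait for all components to be ready
minikube start -p mn-demo --wait=true --memory=3072 --nodes=2 \
  -v=5 --alsologtostderr --driver=docker --container-runtime=containerd

# Both the control plane and the worker should report Running
minikube -p mn-demo status --alsologtostderr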

TestMultiNode/serial/DeployApp2Nodes (5.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-754371 -- rollout status deployment/busybox: (3.476505805s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- exec busybox-7b57f96db7-dbfmb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- exec busybox-7b57f96db7-xr8pc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- exec busybox-7b57f96db7-dbfmb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- exec busybox-7b57f96db7-xr8pc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- exec busybox-7b57f96db7-dbfmb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- exec busybox-7b57f96db7-xr8pc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.03s)
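A minimal sketch of the same DNS check, assuming the two-node profile "mn-demo" from the previous step; the pod name is a placeholder taken from the busybox deployment, and the manifest path refers to the minikube repository's testdata.

# Deploy the test workload and wait for the rollout to finish
minikube kubectl -p mn-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
minikube kubectl -p mn-demo -- rollout status deployment/busybox

# List pod names and IPs, then resolve cluster DNS from one of the pods
minikube kubectl -p mn-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
minikube kubectl -p mn-demo -- get pods -o jsonpath='{.items[*].status.podIP}'
minikube kubectl -p mn-demo -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local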

TestMultiNode/serial/PingHostFrom2Pods (0.83s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- exec busybox-7b57f96db7-dbfmb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- exec busybox-7b57f96db7-dbfmb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- exec busybox-7b57f96db7-xr8pc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-754371 -- exec busybox-7b57f96db7-xr8pc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
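The host-reachability check reduces to the following sketch (same placeholders as above; the gateway address printed by nslookup will differ per network):

# Resolve the host gateway name published to pods, then ping the returned address
minikube kubectl -p mn-demo -- exec <busybox-pod> -- sh -c \
  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
# Substitute <host-ip> with the address printed by the previous command
minikube kubectl -p mn-demo -- exec <busybox-pod> -- sh -c "ping -c 1 <host-ip>"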

TestMultiNode/serial/AddNode (24.23s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-754371 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-754371 -v=5 --alsologtostderr: (23.5597245s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.23s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-754371 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp testdata/cp-test.txt multinode-754371:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp multinode-754371:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1445165228/001/cp-test_multinode-754371.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp multinode-754371:/home/docker/cp-test.txt multinode-754371-m02:/home/docker/cp-test_multinode-754371_multinode-754371-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m02 "sudo cat /home/docker/cp-test_multinode-754371_multinode-754371-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp multinode-754371:/home/docker/cp-test.txt multinode-754371-m03:/home/docker/cp-test_multinode-754371_multinode-754371-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m03 "sudo cat /home/docker/cp-test_multinode-754371_multinode-754371-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp testdata/cp-test.txt multinode-754371-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp multinode-754371-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1445165228/001/cp-test_multinode-754371-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp multinode-754371-m02:/home/docker/cp-test.txt multinode-754371:/home/docker/cp-test_multinode-754371-m02_multinode-754371.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371 "sudo cat /home/docker/cp-test_multinode-754371-m02_multinode-754371.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp multinode-754371-m02:/home/docker/cp-test.txt multinode-754371-m03:/home/docker/cp-test_multinode-754371-m02_multinode-754371-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m03 "sudo cat /home/docker/cp-test_multinode-754371-m02_multinode-754371-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp testdata/cp-test.txt multinode-754371-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp multinode-754371-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1445165228/001/cp-test_multinode-754371-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp multinode-754371-m03:/home/docker/cp-test.txt multinode-754371:/home/docker/cp-test_multinode-754371-m03_multinode-754371.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371 "sudo cat /home/docker/cp-test_multinode-754371-m03_multinode-754371.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 cp multinode-754371-m03:/home/docker/cp-test.txt multinode-754371-m02:/home/docker/cp-test_multinode-754371-m03_multinode-754371-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 ssh -n multinode-754371-m02 "sudo cat /home/docker/cp-test_multinode-754371-m03_multinode-754371-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.33s)
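The copy matrix above boils down to three variants of "minikube cp", sketched here with placeholder paths and the placeholder profile "mn-demo":

# Host -> node
minikube -p mn-demo cp testdata/cp-test.txt mn-demo:/home/docker/cp-test.txt
# Node -> host
minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt /tmp/cp-test_copy.txt
# Node -> node (source and destination are different machines in the same profile)
minikube -p mn-demo cp mn-demo-m02:/home/docker/cp-test.txt mn-demo-m03:/home/docker/cp-test_m02_m03.txt

# Each copy is verified by cat-ing the file over ssh on the receiving node
minikube -p mn-demo ssh -n mn-demo-m03 "sudo cat /home/docker/cp-test_m02_m03.txt"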

TestMultiNode/serial/StopNode (2.34s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-754371 node stop m03: (1.271361269s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-754371 status: exit status 7 (532.345745ms)

                                                
                                                
-- stdout --
	multinode-754371
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-754371-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-754371-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-754371 status --alsologtostderr: exit status 7 (531.700012ms)

                                                
                                                
-- stdout --
	multinode-754371
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-754371-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-754371-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:36:59.442965  522284 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:36:59.443442  522284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:36:59.443454  522284 out.go:374] Setting ErrFile to fd 2...
	I1124 13:36:59.443459  522284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:36:59.443689  522284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:36:59.443870  522284 out.go:368] Setting JSON to false
	I1124 13:36:59.443900  522284 mustload.go:66] Loading cluster: multinode-754371
	I1124 13:36:59.444013  522284 notify.go:221] Checking for updates...
	I1124 13:36:59.444409  522284 config.go:182] Loaded profile config "multinode-754371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:36:59.444436  522284 status.go:174] checking status of multinode-754371 ...
	I1124 13:36:59.444891  522284 cli_runner.go:164] Run: docker container inspect multinode-754371 --format={{.State.Status}}
	I1124 13:36:59.463070  522284 status.go:371] multinode-754371 host status = "Running" (err=<nil>)
	I1124 13:36:59.463123  522284 host.go:66] Checking if "multinode-754371" exists ...
	I1124 13:36:59.463468  522284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-754371
	I1124 13:36:59.483776  522284 host.go:66] Checking if "multinode-754371" exists ...
	I1124 13:36:59.484112  522284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:36:59.484157  522284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-754371
	I1124 13:36:59.503731  522284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/multinode-754371/id_rsa Username:docker}
	I1124 13:36:59.604880  522284 ssh_runner.go:195] Run: systemctl --version
	I1124 13:36:59.611726  522284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:36:59.624863  522284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:36:59.686027  522284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-24 13:36:59.676102856 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:36:59.686615  522284 kubeconfig.go:125] found "multinode-754371" server: "https://192.168.67.2:8443"
	I1124 13:36:59.686646  522284 api_server.go:166] Checking apiserver status ...
	I1124 13:36:59.686686  522284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:36:59.699751  522284 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1341/cgroup
	W1124 13:36:59.709521  522284 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1341/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:36:59.709584  522284 ssh_runner.go:195] Run: ls
	I1124 13:36:59.714441  522284 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 13:36:59.720340  522284 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 13:36:59.720368  522284 status.go:463] multinode-754371 apiserver status = Running (err=<nil>)
	I1124 13:36:59.720378  522284 status.go:176] multinode-754371 status: &{Name:multinode-754371 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:36:59.720396  522284 status.go:174] checking status of multinode-754371-m02 ...
	I1124 13:36:59.720649  522284 cli_runner.go:164] Run: docker container inspect multinode-754371-m02 --format={{.State.Status}}
	I1124 13:36:59.739663  522284 status.go:371] multinode-754371-m02 host status = "Running" (err=<nil>)
	I1124 13:36:59.739691  522284 host.go:66] Checking if "multinode-754371-m02" exists ...
	I1124 13:36:59.740021  522284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-754371-m02
	I1124 13:36:59.758846  522284 host.go:66] Checking if "multinode-754371-m02" exists ...
	I1124 13:36:59.759232  522284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:36:59.759295  522284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-754371-m02
	I1124 13:36:59.778133  522284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/21932-370498/.minikube/machines/multinode-754371-m02/id_rsa Username:docker}
	I1124 13:36:59.879437  522284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:36:59.892863  522284 status.go:176] multinode-754371-m02 status: &{Name:multinode-754371-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:36:59.892959  522284 status.go:174] checking status of multinode-754371-m03 ...
	I1124 13:36:59.893215  522284 cli_runner.go:164] Run: docker container inspect multinode-754371-m03 --format={{.State.Status}}
	I1124 13:36:59.910788  522284 status.go:371] multinode-754371-m03 host status = "Stopped" (err=<nil>)
	I1124 13:36:59.910811  522284 status.go:384] host is not running, skipping remaining checks
	I1124 13:36:59.910818  522284 status.go:176] multinode-754371-m03 status: &{Name:multinode-754371-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
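Stopping a single node and the resulting status semantics can be reproduced as follows (placeholders as above). Note that "minikube status" deliberately exits with code 7 while any node is stopped, as seen in the output above.

# Stop only the third machine in the profile
minikube -p mn-demo node stop m03

# Status now reports the stopped worker and returns exit code 7
minikube -p mn-demo status; echo "status exit code: $?"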

TestMultiNode/serial/StartAfterStop (7.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-754371 node start m03 -v=5 --alsologtostderr: (6.356949014s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.10s)

TestMultiNode/serial/RestartKeepsNodes (71.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-754371
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-754371
E1124 13:37:25.854006  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-754371: (25.09175238s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-754371 --wait=true -v=5 --alsologtostderr
E1124 13:37:39.256073  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-754371 --wait=true -v=5 --alsologtostderr: (46.675283043s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-754371
--- PASS: TestMultiNode/serial/RestartKeepsNodes (71.90s)
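A sketch of the restart-keeps-nodes check, using the same placeholder profile:

# Record the node list, stop the whole profile, restart it, and compare
minikube node list -p mn-demo
minikube stop -p mn-demo
minikube start -p mn-demo --wait=true -v=5 --alsologtostderr
minikube node list -p mn-demo   # should list the same nodes as before the stop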

TestMultiNode/serial/DeleteNode (5.35s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-754371 node delete m03: (4.712129431s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)

TestMultiNode/serial/StopMultiNode (24.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-754371 stop: (23.90002569s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-754371 status: exit status 7 (102.798995ms)

                                                
                                                
-- stdout --
	multinode-754371
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-754371-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-754371 status --alsologtostderr: exit status 7 (101.752013ms)

                                                
                                                
-- stdout --
	multinode-754371
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-754371-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:38:48.329839  531996 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:38:48.330142  531996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:38:48.330153  531996 out.go:374] Setting ErrFile to fd 2...
	I1124 13:38:48.330157  531996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:38:48.330371  531996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:38:48.330542  531996 out.go:368] Setting JSON to false
	I1124 13:38:48.330574  531996 mustload.go:66] Loading cluster: multinode-754371
	I1124 13:38:48.330688  531996 notify.go:221] Checking for updates...
	I1124 13:38:48.331196  531996 config.go:182] Loaded profile config "multinode-754371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:38:48.331226  531996 status.go:174] checking status of multinode-754371 ...
	I1124 13:38:48.331742  531996 cli_runner.go:164] Run: docker container inspect multinode-754371 --format={{.State.Status}}
	I1124 13:38:48.349869  531996 status.go:371] multinode-754371 host status = "Stopped" (err=<nil>)
	I1124 13:38:48.349938  531996 status.go:384] host is not running, skipping remaining checks
	I1124 13:38:48.349967  531996 status.go:176] multinode-754371 status: &{Name:multinode-754371 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:38:48.350015  531996 status.go:174] checking status of multinode-754371-m02 ...
	I1124 13:38:48.350303  531996 cli_runner.go:164] Run: docker container inspect multinode-754371-m02 --format={{.State.Status}}
	I1124 13:38:48.367979  531996 status.go:371] multinode-754371-m02 host status = "Stopped" (err=<nil>)
	I1124 13:38:48.368010  531996 status.go:384] host is not running, skipping remaining checks
	I1124 13:38:48.368016  531996 status.go:176] multinode-754371-m02 status: &{Name:multinode-754371-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

TestMultiNode/serial/RestartMultiNode (47.08s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-754371 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-754371 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (46.450640566s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-754371 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.08s)

TestMultiNode/serial/ValidateNameConflict (26.28s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-754371
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-754371-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-754371-m02 --driver=docker  --container-runtime=containerd: exit status 14 (84.359336ms)

                                                
                                                
-- stdout --
	* [multinode-754371-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-754371-m02' is duplicated with machine name 'multinode-754371-m02' in profile 'multinode-754371'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-754371-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-754371-m03 --driver=docker  --container-runtime=containerd: (23.395872444s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-754371
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-754371: exit status 80 (312.630956ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-754371 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-754371-m03 already exists in multinode-754371-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-754371-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-754371-m03: (2.428526225s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.28s)
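The two name-conflict cases exercised above can be reproduced with the sketch below; the profile names are placeholders that follow the same pattern as the log.

# Fails with exit code 14: the profile name collides with an existing machine name
minikube start -p mn-demo-m02 --driver=docker --container-runtime=containerd

# Succeeds: a stand-alone profile that happens to use the next machine name
minikube start -p mn-demo-m03 --driver=docker --container-runtime=containerd

# Now fails with exit code 80: "node add" would create mn-demo-m03, which already exists
minikube node add -p mn-demo
minikube delete -p mn-demo-m03   # clean up the conflicting profile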

TestPreload (121.91s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-545750 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-545750 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (50.797136242s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-545750 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-545750 image pull gcr.io/k8s-minikube/busybox: (2.791895018s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-545750
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-545750: (5.736579031s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-545750 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1124 13:41:16.186474  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-545750 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (59.81843496s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-545750 image list
helpers_test.go:175: Cleaning up "test-preload-545750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-545750
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-545750: (2.516319127s)
--- PASS: TestPreload (121.91s)
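The preload scenario follows this shape (the profile name is a placeholder; the image and Kubernetes version are the ones used above):

# Start an older Kubernetes without the preloaded images tarball
minikube start -p preload-demo --memory=3072 --wait=true --preload=false \
  --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.0

# Pull an extra image, stop the profile, then start it again
minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
minikube stop -p preload-demo
minikube start -p preload-demo --memory=3072 --wait=true \
  --driver=docker --container-runtime=containerd

# The previously pulled image should still be listed after the restart
minikube -p preload-demo image list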

TestScheduledStopUnix (99.46s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-788479 --memory=3072 --driver=docker  --container-runtime=containerd
E1124 13:42:25.855203  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-788479 --memory=3072 --driver=docker  --container-runtime=containerd: (23.176946936s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-788479 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 13:42:31.213877  550292 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:42:31.214184  550292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:42:31.214202  550292 out.go:374] Setting ErrFile to fd 2...
	I1124 13:42:31.214206  550292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:42:31.214409  550292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:42:31.214669  550292 out.go:368] Setting JSON to false
	I1124 13:42:31.214765  550292 mustload.go:66] Loading cluster: scheduled-stop-788479
	I1124 13:42:31.215120  550292 config.go:182] Loaded profile config "scheduled-stop-788479": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:42:31.215205  550292 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/config.json ...
	I1124 13:42:31.215392  550292 mustload.go:66] Loading cluster: scheduled-stop-788479
	I1124 13:42:31.215492  550292 config.go:182] Loaded profile config "scheduled-stop-788479": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-788479 -n scheduled-stop-788479
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-788479 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 13:42:31.631025  550439 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:42:31.631326  550439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:42:31.631337  550439 out.go:374] Setting ErrFile to fd 2...
	I1124 13:42:31.631343  550439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:42:31.631564  550439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:42:31.631832  550439 out.go:368] Setting JSON to false
	I1124 13:42:31.632112  550439 daemonize_unix.go:73] killing process 550325 as it is an old scheduled stop
	I1124 13:42:31.632259  550439 mustload.go:66] Loading cluster: scheduled-stop-788479
	I1124 13:42:31.632672  550439 config.go:182] Loaded profile config "scheduled-stop-788479": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:42:31.632753  550439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/config.json ...
	I1124 13:42:31.632992  550439 mustload.go:66] Loading cluster: scheduled-stop-788479
	I1124 13:42:31.633215  550439 config.go:182] Loaded profile config "scheduled-stop-788479": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 13:42:31.638321  374122 retry.go:31] will retry after 84.734µs: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.639512  374122 retry.go:31] will retry after 206.908µs: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.640688  374122 retry.go:31] will retry after 313.463µs: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.641857  374122 retry.go:31] will retry after 318.684µs: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.642985  374122 retry.go:31] will retry after 255.547µs: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.644176  374122 retry.go:31] will retry after 677.895µs: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.645326  374122 retry.go:31] will retry after 831.902µs: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.646485  374122 retry.go:31] will retry after 866.103µs: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.647626  374122 retry.go:31] will retry after 3.745235ms: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.651839  374122 retry.go:31] will retry after 2.707672ms: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.655195  374122 retry.go:31] will retry after 4.992298ms: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.660495  374122 retry.go:31] will retry after 7.884986ms: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.668782  374122 retry.go:31] will retry after 18.491624ms: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.688085  374122 retry.go:31] will retry after 11.018226ms: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.699340  374122 retry.go:31] will retry after 39.61411ms: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
I1124 13:42:31.739647  374122 retry.go:31] will retry after 36.153453ms: open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-788479 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-788479 -n scheduled-stop-788479
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-788479
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-788479 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 13:42:57.573259  551324 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:42:57.573509  551324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:42:57.573517  551324 out.go:374] Setting ErrFile to fd 2...
	I1124 13:42:57.573522  551324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:42:57.573719  551324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:42:57.573979  551324 out.go:368] Setting JSON to false
	I1124 13:42:57.574068  551324 mustload.go:66] Loading cluster: scheduled-stop-788479
	I1124 13:42:57.574391  551324 config.go:182] Loaded profile config "scheduled-stop-788479": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:42:57.574455  551324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/scheduled-stop-788479/config.json ...
	I1124 13:42:57.574640  551324 mustload.go:66] Loading cluster: scheduled-stop-788479
	I1124 13:42:57.574734  551324 config.go:182] Loaded profile config "scheduled-stop-788479": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-788479
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-788479: exit status 7 (87.581887ms)

                                                
                                                
-- stdout --
	scheduled-stop-788479
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-788479 -n scheduled-stop-788479
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-788479 -n scheduled-stop-788479: exit status 7 (84.844386ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-788479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-788479
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-788479: (4.670351898s)
--- PASS: TestScheduledStopUnix (99.46s)
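The scheduled-stop workflow above can be driven by hand like this (placeholder profile name; schedules as in the test):

# Schedule a stop, then cancel it before it fires
minikube stop -p sched-demo --schedule 5m
minikube stop -p sched-demo --cancel-scheduled
minikube status -p sched-demo        # still Running

# Schedule a short stop and give it time to take effect
minikube stop -p sched-demo --schedule 15s
sleep 30
minikube status -p sched-demo        # exits 7 and reports host/kubelet/apiserver Stopped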

TestInsufficientStorage (12.43s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-743798 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
E1124 13:43:48.922496  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-743798 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.90232456s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5712f88e-8449-45b3-970c-e719ce279ec5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-743798] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c9227bd-8359-4cca-9cc5-55d5af9af9d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21932"}}
	{"specversion":"1.0","id":"4afc822d-786b-4e46-8b7d-587e137e9ba2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bcd81c05-6cc5-4b01-b3fc-8799719d049d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig"}}
	{"specversion":"1.0","id":"9cf6f545-54b0-4099-aed6-b319be38473f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube"}}
	{"specversion":"1.0","id":"b1303438-a064-44c4-ac93-d862fa5d1b4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"24fb2b41-d2b1-433d-a21e-b7819e9aca39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a25ab262-f004-4113-90d4-7eea7eb03429","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d7cea305-5d69-4ef1-82c7-7c161f00cdc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"645c925b-ba1a-4845-8eff-a562ead38c30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"da251012-7a3c-4640-88b0-34ed47722fc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e01958f2-2aed-49be-862e-e4140b0b69da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-743798\" primary control-plane node in \"insufficient-storage-743798\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"25a7faa5-7fb4-4092-85e5-033e5528848c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7f444c6-1695-44c5-ab54-d6c84148bfa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae935f41-341a-420b-9bd3-fd2b3504ba99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-743798 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-743798 --output=json --layout=cluster: exit status 7 (309.712365ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-743798","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-743798","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 13:43:57.631890  553586 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-743798" does not appear in /home/jenkins/minikube-integration/21932-370498/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-743798 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-743798 --output=json --layout=cluster: exit status 7 (308.097402ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-743798","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-743798","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 13:43:57.940665  553695 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-743798" does not appear in /home/jenkins/minikube-integration/21932-370498/kubeconfig
	E1124 13:43:57.951567  553695 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/insufficient-storage-743798/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-743798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-743798
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-743798: (1.909894439s)
--- PASS: TestInsufficientStorage (12.43s)
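For reference, the remediation suggested in the RSRC_DOCKER_STORAGE message above can be applied by hand when /var on the host fills up. A minimal sketch using the same profile name as this run (substitute your own); the in-node prune is left commented out because, as the message notes, it only applies when the cluster uses the Docker runtime, and this run uses containerd:

	# Reclaim space on the host Docker daemon (add -a to also drop unused images)
	docker system prune
	# Only relevant when the cluster uses the Docker runtime; this run uses containerd
	# minikube ssh -p insufficient-storage-743798 -- docker system prune
	# Re-check: StatusCode 507 / InsufficientStorage should clear once /var has room again
	out/minikube-linux-amd64 status -p insufficient-storage-743798 --output=json --layout=cluster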

                                                
                                    
x
+
TestRunningBinaryUpgrade (64.15s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2866315139 start -p running-upgrade-648987 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2866315139 start -p running-upgrade-648987 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (37.245358245s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-648987 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-648987 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.693785685s)
helpers_test.go:175: Cleaning up "running-upgrade-648987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-648987
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-648987: (2.500895055s)
--- PASS: TestRunningBinaryUpgrade (64.15s)
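The passing flow here is an in-place binary upgrade: an older release creates the profile, then the freshly built binary re-runs start against the same, still-running profile. The commands below are the ones from this run; the /tmp/minikube-v1.32.0.* path is the temporary copy of the old release that the test downloads.

	# 1. Create the cluster with the old release binary
	/tmp/minikube-v1.32.0.2866315139 start -p running-upgrade-648987 --memory=3072 --vm-driver=docker --container-runtime=containerd
	# 2. Upgrade in place: the new binary reuses the running profile
	out/minikube-linux-amd64 start -p running-upgrade-648987 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
	# 3. Clean up
	out/minikube-linux-amd64 delete -p running-upgrade-648987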

                                                
                                    
x
+
TestKubernetesUpgrade (344.66s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.005414388s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-358357
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-358357: (1.350558013s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-358357 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-358357 status --format={{.Host}}: exit status 7 (131.737016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.637226089s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-358357 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (96.274608ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-358357] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-358357
	    minikube start -p kubernetes-upgrade-358357 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3583572 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-358357 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.614489877s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-358357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-358357
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-358357: (2.755860421s)
--- PASS: TestKubernetesUpgrade (344.66s)
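The downgrade attempt in this test is meant to fail: once the profile is on v1.34.1, requesting v1.28.0 exits with K8S_DOWNGRADE_UNSUPPORTED (exit status 106). Per the suggestion minikube prints, a downgrade goes through delete and recreate; a sketch with the same profile name as this run:

	# Rejected: downgrading an existing cluster in place is unsupported (exit status 106)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-358357 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	# Following the printed suggestion: recreate the cluster at the older version instead
	minikube delete -p kubernetes-upgrade-358357
	minikube start -p kubernetes-upgrade-358357 --kubernetes-version=v1.28.0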

                                                
                                    
x
+
TestMissingContainerUpgrade (140.31s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.72483808 start -p missing-upgrade-133621 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.72483808 start -p missing-upgrade-133621 --memory=3072 --driver=docker  --container-runtime=containerd: (1m30.204638371s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-133621
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-133621
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-133621 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-133621 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.160761097s)
helpers_test.go:175: Cleaning up "missing-upgrade-133621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-133621
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-133621: (2.492203816s)
--- PASS: TestMissingContainerUpgrade (140.31s)
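This scenario removes the node container out from under an existing profile and checks that the new binary can recreate it from the profile data alone. A sketch of the same sequence, using the binaries and names from this run:

	# Create the cluster with the old release binary
	/tmp/minikube-v1.32.0.72483808 start -p missing-upgrade-133621 --memory=3072 --driver=docker --container-runtime=containerd
	# Simulate the missing container
	docker stop missing-upgrade-133621
	docker rm missing-upgrade-133621
	# The newer binary recreates the node container for the existing profile
	out/minikube-linux-amd64 start -p missing-upgrade-133621 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd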

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.76s)

                                                
                                    
x
+
TestPause/serial/Start (45.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-121251 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-121251 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (45.945835697s)
--- PASS: TestPause/serial/Start (45.95s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (120.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.506167276 start -p stopped-upgrade-178664 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.506167276 start -p stopped-upgrade-178664 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m30.694887268s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.506167276 -p stopped-upgrade-178664 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.506167276 -p stopped-upgrade-178664 stop: (1.754886864s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-178664 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-178664 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.586500493s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (120.04s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-121251 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-121251 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.464368264s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.48s)

                                                
                                    
x
+
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-121251 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-121251 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-121251 --output=json --layout=cluster: exit status 2 (402.119432ms)

                                                
                                                
-- stdout --
	{"Name":"pause-121251","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-121251","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
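Exit status 2 is expected while the cluster is paused; the useful signal is in the JSON, where StatusCode 418 maps to Paused and 405 to Stopped, as seen above. A small sketch for pulling those fields out of the layout output; jq is not part of the test and is only assumed to be installed:

	# Capture the cluster layout even though the command exits non-zero while paused
	out/minikube-linux-amd64 status -p pause-121251 --output=json --layout=cluster > /tmp/status.json || true
	# Cluster-level state (418 = Paused in this run)
	jq -r '.StatusCode, .StatusName' /tmp/status.json
	# Per-component state on each node (apiserver Paused, kubelet Stopped above)
	jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"' /tmp/status.json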

                                                
                                    
x
+
TestPause/serial/Unpause (1.41s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-121251 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-121251 --alsologtostderr -v=5: (1.405935649s)
--- PASS: TestPause/serial/Unpause (1.41s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.34s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-121251 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-121251 --alsologtostderr -v=5: (1.338538963s)
--- PASS: TestPause/serial/PauseAgain (1.34s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.28s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-121251 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-121251 --alsologtostderr -v=5: (3.281185267s)
--- PASS: TestPause/serial/DeletePaused (3.28s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.51s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-121251
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-121251: exit status 1 (21.218115ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-121251: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)
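After delete, the test confirms that no Docker artifacts for the profile remain, and the "no such volume" error above is the desired outcome. The same check can be done by hand; the grep filters are an addition of this note, not part of the test:

	out/minikube-linux-amd64 profile list --output json
	# None of these should mention the deleted profile any more
	docker ps -a | grep pause-121251 || echo "no container"
	docker volume inspect pause-121251 || echo "no volume"
	docker network ls | grep pause-121251 || echo "no network"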

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-178664
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-178664: (1.287241503s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-787855 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-787855 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (102.412715ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-787855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
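This failure is a deliberate flag conflict: --no-kubernetes cannot be combined with --kubernetes-version. When the version comes from a persisted global config rather than the command line, the hint minikube prints is the way out; a sketch:

	# Fails with exit status 14 (MK_USAGE): the two flags are mutually exclusive
	out/minikube-linux-amd64 start -p NoKubernetes-787855 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	# Drop a globally configured kubernetes-version, then retry without the flag
	minikube config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-787855 --no-kubernetes --driver=docker --container-runtime=containerd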

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (27.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-787855 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1124 13:46:16.186975  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-787855 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (26.861112939s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-787855 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (27.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-355661 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-355661 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (190.704893ms)

                                                
                                                
-- stdout --
	* [false-355661] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:46:24.415745  588817 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:46:24.416063  588817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:46:24.416075  588817 out.go:374] Setting ErrFile to fd 2...
	I1124 13:46:24.416082  588817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:46:24.416356  588817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-370498/.minikube/bin
	I1124 13:46:24.416859  588817 out.go:368] Setting JSON to false
	I1124 13:46:24.418120  588817 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8923,"bootTime":1763983061,"procs":345,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:46:24.418193  588817 start.go:143] virtualization: kvm guest
	I1124 13:46:24.420371  588817 out.go:179] * [false-355661] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:46:24.422107  588817 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:46:24.422134  588817 notify.go:221] Checking for updates...
	I1124 13:46:24.424720  588817 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:46:24.426078  588817 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-370498/kubeconfig
	I1124 13:46:24.427245  588817 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-370498/.minikube
	I1124 13:46:24.428477  588817 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:46:24.429668  588817 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:46:24.431386  588817 config.go:182] Loaded profile config "NoKubernetes-787855": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:46:24.431485  588817 config.go:182] Loaded profile config "force-systemd-env-875063": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:46:24.431566  588817 config.go:182] Loaded profile config "kubernetes-upgrade-358357": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:46:24.431674  588817 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:46:24.459872  588817 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:46:24.460043  588817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:46:24.528380  588817 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 13:46:24.516401835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:46:24.528524  588817 docker.go:319] overlay module found
	I1124 13:46:24.531122  588817 out.go:179] * Using the docker driver based on user configuration
	I1124 13:46:24.532548  588817 start.go:309] selected driver: docker
	I1124 13:46:24.532570  588817 start.go:927] validating driver "docker" against <nil>
	I1124 13:46:24.532587  588817 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:46:24.534773  588817 out.go:203] 
	W1124 13:46:24.536088  588817 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1124 13:46:24.537283  588817 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-355661 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-355661" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:45:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-358357
contexts:
- context:
    cluster: kubernetes-upgrade-358357
    user: kubernetes-upgrade-358357
  name: kubernetes-upgrade-358357
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-358357
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/kubernetes-upgrade-358357/client.crt
    client-key: /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/kubernetes-upgrade-358357/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-355661

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355661"

                                                
                                                
----------------------- debugLogs end: false-355661 [took: 4.275270789s] --------------------------------
helpers_test.go:175: Cleaning up "false-355661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-355661
--- PASS: TestNetworkPlugins/group/false (4.65s)
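The whole group is expected to fail fast: with the containerd runtime, minikube rejects --cni=false before creating any node (MK_USAGE, exit status 14), which is why every debugLogs probe above reports a missing profile or context. A minimal reproduction with the flags from this run:

	# Rejected immediately: the "containerd" container runtime requires CNI
	out/minikube-linux-amd64 start -p false-355661 --memory=3072 --alsologtostderr --cni=false --driver=docker --container-runtime=containerd
	# No cluster is left behind, but the profile stub is still removed
	out/minikube-linux-amd64 delete -p false-355661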

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (23.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.416709472s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-787855 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-787855 status -o json: exit status 2 (332.637654ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-787855","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-787855
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-787855: (2.146371694s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-787855 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.136245457s)
--- PASS: TestNoKubernetes/serial/Start (7.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21932-370498/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-787855 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-787855 "sudo systemctl is-active --quiet service kubelet": exit status 1 (302.512889ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
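The assertion is an exit-code probe: systemctl is-active --quiet returns non-zero when kubelet is not running, so the ssh command failing is exactly what makes the test pass. A sketch of the same probe with an explicit result; the if/else wrapper is an addition of this note, not part of the test:

	if out/minikube-linux-amd64 ssh -p NoKubernetes-787855 "sudo systemctl is-active --quiet service kubelet"; then
	  echo "kubelet is running (unexpected for a --no-kubernetes profile)"
	else
	  echo "kubelet is not active, as expected"
	fi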

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (16.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.268162352s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-787855
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-787855: (1.334467363s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-787855 --driver=docker  --container-runtime=containerd
E1124 13:47:25.850106  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-787855 --driver=docker  --container-runtime=containerd: (8.782957568s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.78s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (51.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-513442 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-513442 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.919391049s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-787855 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-787855 "sudo systemctl is-active --quiet service kubelet": exit status 1 (316.361648ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (52.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-608395 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-608395 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (52.572755404s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.57s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-513442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-513442 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.00s)
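The addon is enabled against the live cluster with its image and registry redirected to stand-ins, so the run does not depend on pulling the real metrics-server image; fake.domain is a dummy registry from the test, not a working one. The same pattern by hand:

	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-513442 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	# Confirm the deployment picked up the overridden image and registry
	kubectl --context old-k8s-version-513442 describe deploy/metrics-server -n kube-system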

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-513442 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-513442 --alsologtostderr -v=3: (12.167456866s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-608395 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-608395 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-608395 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-608395 --alsologtostderr -v=3: (12.115591035s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-513442 -n old-k8s-version-513442
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-513442 -n old-k8s-version-513442: exit status 7 (88.290696ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-513442 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
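The block above exercises the "enable an addon while the profile is stopped" path: status --format={{.Host}} prints Stopped and exits with code 7 (which the test flags as "may be ok"), and the dashboard addon is then enabled offline. A manual re-run would look roughly like this, reusing the exact commands from the log:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-513442 -n old-k8s-version-513442    # prints "Stopped", exit code 7
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-513442 --images=MetricsScraper=registry.k8s.io/echoserver:1.4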

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-513442 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-513442 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (44.006477667s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-513442 -n old-k8s-version-513442
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-608395 -n no-preload-608395
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-608395 -n no-preload-608395: exit status 7 (91.034453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-608395 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (47.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-608395 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-608395 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.431387966s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-608395 -n no-preload-608395
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-696db" [48c51557-2b33-4c12-ad44-5a9ed3ce9b31] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003573244s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-696db" [48c51557-2b33-4c12-ad44-5a9ed3ce9b31] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003716777s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-513442 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-frmwx" [aaec62a5-9058-4add-89cc-c6847c46c4a9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004401695s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-513442 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-513442 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-513442 -n old-k8s-version-513442
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-513442 -n old-k8s-version-513442: exit status 2 (341.079718ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-513442 -n old-k8s-version-513442
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-513442 -n old-k8s-version-513442: exit status 2 (329.642696ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-513442 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-513442 -n old-k8s-version-513442
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-513442 -n old-k8s-version-513442
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)
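The Pause test pauses the profile, confirms via the status templates that the API server reports Paused and the kubelet reports Stopped (both with exit status 2, which the test tolerates), then unpauses and checks again. The same sequence by hand, taken verbatim from the commands above:

    out/minikube-linux-amd64 pause -p old-k8s-version-513442 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-513442 -n old-k8s-version-513442    # "Paused", exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-513442 -n old-k8s-version-513442      # "Stopped", exit 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-513442 --alsologtostderr -v=1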

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-frmwx" [aaec62a5-9058-4add-89cc-c6847c46c4a9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003589087s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-608395 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (47.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-971503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-971503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.148630887s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-608395 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
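VerifyKubernetesImages lists the images loaded into the profile as JSON and reports anything outside the expected Kubernetes image set; the kindnetd and busybox entries above are noted but tolerated. To inspect the same list manually (the table variant is an assumption about the formats supported by this minikube build, not something the test runs):

    out/minikube-linux-amd64 -p no-preload-608395 image list --format=json
    # a more readable view of the same list, assuming table output is available:
    out/minikube-linux-amd64 -p no-preload-608395 image list --format=table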

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-608395 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-608395 -n no-preload-608395
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-608395 -n no-preload-608395: exit status 2 (431.01308ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-608395 -n no-preload-608395
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-608395 -n no-preload-608395: exit status 2 (425.953133ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-608395 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-608395 --alsologtostderr -v=1: (1.039714983s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-608395 -n no-preload-608395
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-608395 -n no-preload-608395
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-403602 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-403602 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (51.146196632s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (35.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (35.70076395s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.70s)
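The newest-cni profile starts with --network-plugin=cni and hands a pod network CIDR straight to kubeadm via --extra-config, which takes component.key=value pairs. The start invocation from the log, wrapped over two lines for readability only:

    out/minikube-linux-amd64 start -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1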

                                                
                                    
TestNetworkPlugins/group/auto/Start (45.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (45.272127717s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-846862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-846862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.602473638s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-846862 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-846862 --alsologtostderr -v=3: (1.454007578s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-846862 -n newest-cni-846862
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-846862 -n newest-cni-846862: exit status 7 (86.944157ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-846862 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (11.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-846862 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (10.92113135s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-846862 -n newest-cni-846862
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-846862 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-971503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-971503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.082625339s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-971503 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-846862 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-846862 -n newest-cni-846862
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-846862 -n newest-cni-846862: exit status 2 (465.403876ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-846862 -n newest-cni-846862
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-846862 -n newest-cni-846862: exit status 2 (462.483722ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-846862 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-846862 -n newest-cni-846862
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-846862 -n newest-cni-846862
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-971503 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-971503 --alsologtostderr -v=3: (12.339962825s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (42.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (42.808247339s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-971503 -n embed-certs-971503
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-971503 -n embed-certs-971503: exit status 7 (96.178114ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-971503 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (53.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-971503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-971503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.070920427s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-971503 -n embed-certs-971503
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-403602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-403602 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-403602 --alsologtostderr -v=3
E1124 13:51:16.186250  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/addons-093377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-403602 --alsologtostderr -v=3: (12.115113199s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-355661 "pgrep -a kubelet"
I1124 13:51:19.876130  374122 config.go:182] Loaded profile config "auto-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-355661 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8kxsb" [d57c454f-674d-499c-be7c-a1ff448acc0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8kxsb" [d57c454f-674d-499c-be7c-a1ff448acc0b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.010148652s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)
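The NetCatPod step force-replaces the netcat deployment from testdata/netcat-deployment.yaml and then polls for up to 15 minutes until a pod labelled app=netcat is Running. A hand-driven equivalent is sketched below; the kubectl wait call is a stand-in for the test helper's poll loop, not something the test itself runs:

    kubectl --context auto-355661 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-355661 wait --for=condition=Ready pod -l app=netcat --timeout=15m    # approximates the helper's 15m poll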

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602: exit status 7 (107.93112ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-403602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-403602 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-403602 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (49.887970238s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.34s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-355661 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
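Taken together, the DNS, Localhost and HairPin steps probe three things from inside the netcat pod: cluster DNS resolution, reachability of the pod's own listening port via localhost, and hairpin traffic back to its own service name. The three probes, copied from the commands above:

    kubectl --context auto-355661 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # localhost reachability
    kubectl --context auto-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # hairpin: pod reaching its own service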

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6rnb2" [0747b9f6-a38a-410e-ac96-b8c0055bfe7f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004483548s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
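The ControllerPod step waits up to 10 minutes for the kindnet DaemonSet pod (label app=kindnet) in kube-system to become healthy. A rough manual equivalent of that wait; kubectl wait here is an approximation of the helper's polling, not the test's own code:

    kubectl --context kindnet-355661 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m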

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-355661 "pgrep -a kubelet"
I1124 13:51:48.642164  374122 config.go:182] Loaded profile config "kindnet-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-355661 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8x8zj" [5dddc262-abf1-4d96-86c4-41b101ed8800] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8x8zj" [5dddc262-abf1-4d96-86c4-41b101ed8800] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003928023s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (59.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (59.226976453s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-355661 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5lxxf" [1f1904ca-eeaf-4460-9562-5b91ef6dccfd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005559953s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5lxxf" [1f1904ca-eeaf-4460-9562-5b91ef6dccfd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003758791s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-971503 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t6vlk" [9b39e797-c8f6-4e47-88bd-67a958bb3e1e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004812827s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-971503 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-971503 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-971503 -n embed-certs-971503
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-971503 -n embed-certs-971503: exit status 2 (393.733022ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-971503 -n embed-certs-971503
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-971503 -n embed-certs-971503: exit status 2 (411.923515ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-971503 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-971503 -n embed-certs-971503
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-971503 -n embed-certs-971503
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t6vlk" [9b39e797-c8f6-4e47-88bd-67a958bb3e1e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003883004s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-403602 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (56.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (56.553803709s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.55s)
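The custom-flannel profile shows that --cni accepts a path to a CNI manifest (here testdata/kube-flannel.yaml) in addition to the built-in plugin names used by the other network-plugin runs. The start invocation from the log, wrapped over two lines for readability only:

    out/minikube-linux-amd64 start -p custom-flannel-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd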

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (42.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (42.320051789s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-403602 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-403602 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-403602 --alsologtostderr -v=1: (1.398370753s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602: exit status 2 (371.801751ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602: exit status 2 (361.849412ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-403602 --alsologtostderr -v=1
E1124 13:52:25.849455  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/functional-420317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-403602 --alsologtostderr -v=1: (1.441029174s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-403602 -n default-k8s-diff-port-403602
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (55.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.511815418s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-msdpb" [d930fd09-bb9d-47b7-ac8b-fed3673adde4] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-msdpb" [d930fd09-bb9d-47b7-ac8b-fed3673adde4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004962962s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
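
ControllerPod only waits for the CNI's own controller/daemon pod to become Ready. The same gate can be expressed directly with kubectl wait, reusing the selector and namespace from the log (the flannel entry further down waits on app=flannel in kube-flannel); a sketch:

	# calico's node agent runs in kube-system under the k8s-app=calico-node label
	kubectl --context calico-355661 -n kube-system get pods -l k8s-app=calico-node
	kubectl --context calico-355661 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m
	# flannel equivalent: -n kube-flannel ... -l app=flannel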

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-355661 "pgrep -a kubelet"
I1124 13:52:55.497906  374122 config.go:182] Loaded profile config "calico-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)
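
KubeletFlags just greps the running kubelet command line over minikube ssh. To narrow that output to the runtime-related arguments, the same command can be piped through tr/grep (a sketch; which flags actually appear on the command line depends on the minikube and kubelet versions):

	out/minikube-linux-amd64 ssh -p calico-355661 "pgrep -a kubelet"
	# show only runtime/CNI related arguments, if any are present
	out/minikube-linux-amd64 ssh -p calico-355661 "pgrep -a kubelet | tr ' ' '\n' | grep -i -E 'runtime|cni'"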

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-355661 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k8n7x" [d8f15cff-7d53-4f99-9bac-a9c3723b8ce9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k8n7x" [d8f15cff-7d53-4f99-9bac-a9c3723b8ce9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.005036458s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.22s)
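
NetCatPod force-replaces the netcat deployment from the test's testdata directory and then waits for the app=netcat pod to become Ready. Outside the harness, the equivalent deploy-and-wait is roughly (a sketch; testdata/netcat-deployment.yaml is the manifest shipped with the minikube integration tests):

	kubectl --context calico-355661 replace --force -f testdata/netcat-deployment.yaml
	# block until the pod behind the deployment reports Ready (the test allows up to 15m)
	kubectl --context calico-355661 -n default wait --for=condition=Ready pod -l app=netcat --timeout=15m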

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-355661 "pgrep -a kubelet"
I1124 13:53:01.827649  374122 config.go:182] Loaded profile config "enable-default-cni-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-355661 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xcf5m" [a1e53e48-10d6-4043-8221-193d8ec0dc5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xcf5m" [a1e53e48-10d6-4043-8221-193d8ec0dc5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003718236s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-355661 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)
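
The DNS, Localhost and HairPin checks above all exec into that netcat deployment: DNS resolves the in-cluster kubernetes.default service, Localhost connects to the pod's own listener over 127.0.0.1, and HairPin connects back to the same pod through its netcat service name, which only succeeds when hairpin NAT is handled correctly by the CNI/kube-proxy path. Collected with comments, the three probes are:

	# service DNS resolution from inside the pod
	kubectl --context calico-355661 exec deployment/netcat -- nslookup kubernetes.default
	# loopback reachability of the pod's own port
	kubectl --context calico-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: reach the same pod back through its service name
	kubectl --context calico-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"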

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-355661 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-355661 "pgrep -a kubelet"
I1124 13:53:15.929454  374122 config.go:182] Loaded profile config "custom-flannel-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-355661 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6zzdt" [7f3172fa-4c32-4277-b676-586d0ed9a8b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6zzdt" [7f3172fa-4c32-4277-b676-586d0ed9a8b5] Running
E1124 13:53:22.349332  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:22.356024  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:22.367500  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:22.389079  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:22.430603  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:22.512062  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:22.673894  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004536299s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-355661 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (70.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-355661 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m10.764224926s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-x5r9s" [27053a42-e0f1-4c22-abaf-15f432711901] Running
E1124 13:53:27.481148  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:27.703790  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:27.710237  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:27.721647  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:27.743196  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:27.784669  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:27.866122  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:28.027754  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:28.349880  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:53:28.991666  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004534918s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-355661 "pgrep -a kubelet"
E1124 13:53:32.602516  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1124 13:53:32.628565  374122 config.go:182] Loaded profile config "flannel-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-355661 replace --force -f testdata/netcat-deployment.yaml
E1124 13:53:32.834617  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tknqv" [3321d675-bd46-4892-9840-4e0954b40f2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tknqv" [3321d675-bd46-4892-9840-4e0954b40f2b] Running
E1124 13:53:37.956426  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/no-preload-608395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004912228s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-355661 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-355661 "pgrep -a kubelet"
I1124 13:54:36.765468  374122 config.go:182] Loaded profile config "bridge-355661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-355661 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-724rg" [527c225e-b49a-4213-80f6-3d20420150d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-724rg" [527c225e-b49a-4213-80f6-3d20420150d3] Running
E1124 13:54:44.288298  374122 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/old-k8s-version-513442/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004060311s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-355661 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-355661 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (26/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-312087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-312087
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
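
Skipped groups like this one still create a stub profile, which the helper then deletes as shown above. The same cleanup can be done by hand with the profile commands minikube itself suggests in the debug logs below (a sketch):

	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 delete -p disable-driver-mounts-312087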

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-355661 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-355661" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:45:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-358357
contexts:
- context:
    cluster: kubernetes-upgrade-358357
    user: kubernetes-upgrade-358357
  name: kubernetes-upgrade-358357
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-358357
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/kubernetes-upgrade-358357/client.crt
    client-key: /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/kubernetes-upgrade-358357/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-355661

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355661"

                                                
                                                
----------------------- debugLogs end: kubenet-355661 [took: 4.007310024s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-355661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-355661
--- SKIP: TestNetworkPlugins/group/kubenet (4.17s)
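
All of the "context was not found" and "Profile ... not found" lines in the debug dump above are expected: the kubenet profile is never created because the group is skipped on containerd, so every kubectl and minikube probe misses. The same absence can be confirmed by hand (a sketch):

	# neither listing shows a kubenet-355661 entry, and no current context is selected
	kubectl config get-contexts
	out/minikube-linux-amd64 profile list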

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-355661 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-355661" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-370498/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:45:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-358357
contexts:
- context:
    cluster: kubernetes-upgrade-358357
    user: kubernetes-upgrade-358357
  name: kubernetes-upgrade-358357
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-358357
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/kubernetes-upgrade-358357/client.crt
    client-key: /home/jenkins/minikube-integration/21932-370498/.minikube/profiles/kubernetes-upgrade-358357/client.key
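
Note (not part of the captured log): the kubeconfig above only contains the leftover kubernetes-upgrade-358357 entry and an empty current-context, which is why every probe against the cilium-355661 context fails with "context was not found" or "Profile not found"; the cilium profile is never created because the test is skipped before "minikube start -p cilium-355661" ever runs. A minimal sketch of how to confirm which contexts a kubeconfig actually defines (assuming kubectl is on the host PATH):

	kubectl config get-contexts
	kubectl config current-context || echo "no current context set"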

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-355661

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-355661" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355661"

                                                
                                                
----------------------- debugLogs end: cilium-355661 [took: 4.607927749s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-355661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-355661
--- SKIP: TestNetworkPlugins/group/cilium (4.81s)

                                                
                                    