Test Report: Docker_Linux_containerd 20535

f30cb3cfe346a634e035681bc4eff951ae572c17:2025-03-17:38751

Failed tests (6/312)

|-------|--------------------------------------------------------------|--------------|
| Order | Failed test                                                  | Duration (s) |
|-------|--------------------------------------------------------------|--------------|
| 288   | TestPause/serial/Start                                       |       593.07 |
| 291   | TestNetworkPlugins/group/kindnet/Start                       |      1147.48 |
| 292   | TestNetworkPlugins/group/calico/Start                        |      1621.08 |
| 319   | TestStartStop/group/old-k8s-version/serial/FirstStart        |       646.49 |
| 326   | TestStartStop/group/no-preload/serial/FirstStart             |       609.71 |
| 328   | TestStartStop/group/default-k8s-diff-port/serial/FirstStart  |       571.67 |
|-------|--------------------------------------------------------------|--------------|
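Each failure can be re-run in isolation via Go's subtest filter. A minimal sketch, assuming a minikube source checkout (integration tests under test/integration) and a pre-built out/minikube-linux-amd64 as used in this report; the exact flag set is abbreviated and the Makefile's integration target remains the authoritative invocation:

	# Re-run only the first failing subtest; integration runs are long, so allow a generous timeout.
	go test -tags integration -v -timeout 60m ./test/integration -run "TestPause/serial/Start"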
TestPause/serial/Start (593.07s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-507725 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-507725 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: exit status 80 (9m51.162586746s)

-- stdout --
	* [pause-507725] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "pause-507725" primary control-plane node in "pause-507725" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	E0317 10:59:37.531758  245681 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-7h92s" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-7h92s" not found
	E0317 11:03:37.537353  245681 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-linux-amd64 start -p pause-507725 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd" : exit status 80
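Exit status 80 (GUEST_START) means the --wait=all check timed out: the stderr above shows one CoreDNS pod (coredns-668d6bf9bc-7h92s) disappearing and the wait for kube-dns hitting its context deadline, so apps_running never saw a Ready kube-dns. While such a profile is still up, the pod state can be inspected through minikube's kubectl pass-through; a sketch, assuming the pause-507725 profile from this run is still running:

	# CoreDNS pods carry the k8s-app=kube-dns label; describe shows why they are not Ready.
	out/minikube-linux-amd64 -p pause-507725 kubectl -- get pods -n kube-system -l k8s-app=kube-dns
	out/minikube-linux-amd64 -p pause-507725 kubectl -- describe pods -n kube-system -l k8s-app=kube-dns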
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-507725
helpers_test.go:235: (dbg) docker inspect pause-507725:

-- stdout --
	[
	    {
	        "Id": "1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98",
	        "Created": "2025-03-17T10:59:16.245373249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246739,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-17T10:59:16.277769413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98/hosts",
	        "LogPath": "/var/lib/docker/containers/1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98/1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98-json.log",
	        "Name": "/pause-507725",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-507725:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-507725",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98",
	                "LowerDir": "/var/lib/docker/overlay2/1fd5adf33cc8e12ca69da7d6b9e0be4e2bfed7ed52ec6638dc21a161e2e4e6bd-init/diff:/var/lib/docker/overlay2/c513cb32e4b42c4b2e1258d7197e5cd39dcbb3306943490e9747416948e6aaf6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fd5adf33cc8e12ca69da7d6b9e0be4e2bfed7ed52ec6638dc21a161e2e4e6bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fd5adf33cc8e12ca69da7d6b9e0be4e2bfed7ed52ec6638dc21a161e2e4e6bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fd5adf33cc8e12ca69da7d6b9e0be4e2bfed7ed52ec6638dc21a161e2e4e6bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-507725",
	                "Source": "/var/lib/docker/volumes/pause-507725/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-507725",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-507725",
	                "name.minikube.sigs.k8s.io": "pause-507725",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eca5c5922aba6b69430c0e74806f2f880eab4aac4913a892d86b9b1e948a4045",
	            "SandboxKey": "/var/run/docker/netns/eca5c5922aba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-507725": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:28:05:3d:9a:5b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7305c82bb37b2a024025f05e887ad87dca42b0a81244e064bd8ebd79b0338eef",
	                    "EndpointID": "0d5ff564a65b39555ce3272ccb52e242ce246206d9901e420d4506dbf3ae438d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-507725",
	                        "1ec91abc09f5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
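Rather than scanning the full JSON, individual fields from the same dump can be pulled with docker inspect format templates (the port-lookup pattern below also appears verbatim in the minikube logs further down); container name taken from this report:

	# Container state and the static IP on the pause-507725 network.
	docker inspect -f '{{.State.Status}}' pause-507725
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pause-507725
	# Host port mapped to the API server port 8443 inside the container.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-507725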
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-507725 -n pause-507725
helpers_test.go:244: <<< TestPause/serial/Start FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Start]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-507725 logs -n 25
helpers_test.go:252: TestPause/serial/Start logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-038579          | kubernetes-upgrade-038579 | jenkins | v1.35.0 | 17 Mar 25 10:57 UTC | 17 Mar 25 10:57 UTC |
	| start   | -p kubernetes-upgrade-038579          | kubernetes-upgrade-038579 | jenkins | v1.35.0 | 17 Mar 25 10:57 UTC | 17 Mar 25 11:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-397855             | missing-upgrade-397855    | jenkins | v1.35.0 | 17 Mar 25 10:57 UTC | 17 Mar 25 10:58 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-443193             | running-upgrade-443193    | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:58 UTC |
	| start   | -p force-systemd-flag-408852          | force-systemd-flag-408852 | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:58 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-397855             | missing-upgrade-397855    | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:58 UTC |
	| ssh     | force-systemd-flag-408852             | force-systemd-flag-408852 | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:58 UTC |
	|         | ssh cat                               |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-408852          | force-systemd-flag-408852 | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:58 UTC |
	| start   | -p cert-options-442523                | cert-options-442523       | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:59 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-873690             | minikube                  | jenkins | v1.26.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:59 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=docker                    |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| ssh     | cert-options-442523 ssh               | cert-options-442523       | jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-442523 -- sudo        | cert-options-442523       | jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-442523                | cert-options-442523       | jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
	| start   | -p pause-507725 --memory=2048         | pause-507725              | jenkins | v1.35.0 | 17 Mar 25 10:59 UTC |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-873690 stop           | minikube                  | jenkins | v1.26.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
	| start   | -p stopped-upgrade-873690             | stopped-upgrade-873690    | jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 11:00 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-873690             | stopped-upgrade-873690    | jenkins | v1.35.0 | 17 Mar 25 11:00 UTC | 17 Mar 25 11:00 UTC |
	| start   | -p auto-236437 --memory=3072          | auto-236437               | jenkins | v1.35.0 | 17 Mar 25 11:00 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-196744             | cert-expiration-196744    | jenkins | v1.35.0 | 17 Mar 25 11:00 UTC | 17 Mar 25 11:00 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-196744             | cert-expiration-196744    | jenkins | v1.35.0 | 17 Mar 25 11:00 UTC | 17 Mar 25 11:00 UTC |
	| start   | -p kindnet-236437                     | kindnet-236437            | jenkins | v1.35.0 | 17 Mar 25 11:00 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-038579          | kubernetes-upgrade-038579 | jenkins | v1.35.0 | 17 Mar 25 11:02 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-038579          | kubernetes-upgrade-038579 | jenkins | v1.35.0 | 17 Mar 25 11:02 UTC | 17 Mar 25 11:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-038579          | kubernetes-upgrade-038579 | jenkins | v1.35.0 | 17 Mar 25 11:02 UTC | 17 Mar 25 11:02 UTC |
	| start   | -p calico-236437 --memory=3072        | calico-236437             | jenkins | v1.35.0 | 17 Mar 25 11:02 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=calico --driver=docker          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 11:02:24
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 11:02:24.880858  271403 out.go:345] Setting OutFile to fd 1 ...
	I0317 11:02:24.881135  271403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:02:24.881147  271403 out.go:358] Setting ErrFile to fd 2...
	I0317 11:02:24.881151  271403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:02:24.881334  271403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 11:02:24.882486  271403 out.go:352] Setting JSON to false
	I0317 11:02:24.884073  271403 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2638,"bootTime":1742206707,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 11:02:24.884163  271403 start.go:139] virtualization: kvm guest
	I0317 11:02:24.885681  271403 out.go:177] * [calico-236437] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 11:02:24.887539  271403 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:02:24.887565  271403 notify.go:220] Checking for updates...
	I0317 11:02:24.889529  271403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:02:24.890553  271403 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:02:24.891476  271403 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 11:02:24.892387  271403 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 11:02:24.893262  271403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:02:24.894457  271403 config.go:182] Loaded profile config "auto-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:02:24.894580  271403 config.go:182] Loaded profile config "kindnet-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:02:24.894677  271403 config.go:182] Loaded profile config "pause-507725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:02:24.894762  271403 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:02:24.918017  271403 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 11:02:24.918114  271403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:02:24.969860  271403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:02:24.960688592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:02:24.969970  271403 docker.go:318] overlay module found
	I0317 11:02:24.971694  271403 out.go:177] * Using the docker driver based on user configuration
	I0317 11:02:24.972796  271403 start.go:297] selected driver: docker
	I0317 11:02:24.972809  271403 start.go:901] validating driver "docker" against <nil>
	I0317 11:02:24.972827  271403 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:02:24.973657  271403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:02:25.022032  271403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:02:25.012636564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:02:25.022160  271403 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:02:25.022392  271403 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:02:25.023911  271403 out.go:177] * Using Docker driver with root privileges
	I0317 11:02:25.024881  271403 cni.go:84] Creating CNI manager for "calico"
	I0317 11:02:25.024899  271403 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0317 11:02:25.024977  271403 start.go:340] cluster config:
	{Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:02:25.026106  271403 out.go:177] * Starting "calico-236437" primary control-plane node in "calico-236437" cluster
	I0317 11:02:25.027136  271403 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 11:02:25.028276  271403 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 11:02:25.029237  271403 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:02:25.029286  271403 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 11:02:25.029305  271403 cache.go:56] Caching tarball of preloaded images
	I0317 11:02:25.029318  271403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 11:02:25.029388  271403 preload.go:172] Found /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:02:25.029403  271403 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 11:02:25.029535  271403 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/config.json ...
	I0317 11:02:25.029562  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/config.json: {Name:mka28e5f5151a7bb8665b9fadb1eddd447540b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:25.050614  271403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 11:02:25.050633  271403 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 11:02:25.050647  271403 cache.go:230] Successfully downloaded all kic artifacts
	I0317 11:02:25.050674  271403 start.go:360] acquireMachinesLock for calico-236437: {Name:mka22ede0df163978b69124089e295c5c09c2417 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:02:25.050757  271403 start.go:364] duration metric: took 70.02µs to acquireMachinesLock for "calico-236437"
	I0317 11:02:25.050781  271403 start.go:93] Provisioning new machine with config: &{Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:02:25.050872  271403 start.go:125] createHost starting for "" (driver="docker")
	I0317 11:02:23.037623  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:25.037658  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:23.149814  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:25.650135  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:24.534023  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:26.534079  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:29.034382  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:25.052899  271403 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0317 11:02:25.053169  271403 start.go:159] libmachine.API.Create for "calico-236437" (driver="docker")
	I0317 11:02:25.053195  271403 client.go:168] LocalClient.Create starting
	I0317 11:02:25.053249  271403 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem
	I0317 11:02:25.053279  271403 main.go:141] libmachine: Decoding PEM data...
	I0317 11:02:25.053293  271403 main.go:141] libmachine: Parsing certificate...
	I0317 11:02:25.053336  271403 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem
	I0317 11:02:25.053354  271403 main.go:141] libmachine: Decoding PEM data...
	I0317 11:02:25.053364  271403 main.go:141] libmachine: Parsing certificate...
	I0317 11:02:25.053671  271403 cli_runner.go:164] Run: docker network inspect calico-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 11:02:25.069801  271403 cli_runner.go:211] docker network inspect calico-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 11:02:25.069854  271403 network_create.go:284] running [docker network inspect calico-236437] to gather additional debugging logs...
	I0317 11:02:25.069871  271403 cli_runner.go:164] Run: docker network inspect calico-236437
	W0317 11:02:25.086515  271403 cli_runner.go:211] docker network inspect calico-236437 returned with exit code 1
	I0317 11:02:25.086545  271403 network_create.go:287] error running [docker network inspect calico-236437]: docker network inspect calico-236437: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-236437 not found
	I0317 11:02:25.086566  271403 network_create.go:289] output of [docker network inspect calico-236437]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-236437 not found
	
	** /stderr **
	I0317 11:02:25.086714  271403 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:02:25.103494  271403 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
	I0317 11:02:25.104219  271403 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-00bf62ef0133 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:c5:34:86:d6:21} reservation:<nil>}
	I0317 11:02:25.104910  271403 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-81e0001ceae7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:6a:cf:1c:79:e6} reservation:<nil>}
	I0317 11:02:25.105515  271403 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-16edb2a113e3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d6:59:06:a9:a8:e8} reservation:<nil>}
	I0317 11:02:25.106325  271403 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d7f060}
	I0317 11:02:25.106346  271403 network_create.go:124] attempt to create docker network calico-236437 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0317 11:02:25.106383  271403 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-236437 calico-236437
	I0317 11:02:25.157870  271403 network_create.go:108] docker network calico-236437 192.168.85.0/24 created
	I0317 11:02:25.157905  271403 kic.go:121] calculated static IP "192.168.85.2" for the "calico-236437" container
	I0317 11:02:25.157997  271403 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 11:02:25.175038  271403 cli_runner.go:164] Run: docker volume create calico-236437 --label name.minikube.sigs.k8s.io=calico-236437 --label created_by.minikube.sigs.k8s.io=true
	I0317 11:02:25.193023  271403 oci.go:103] Successfully created a docker volume calico-236437
	I0317 11:02:25.193103  271403 cli_runner.go:164] Run: docker run --rm --name calico-236437-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-236437 --entrypoint /usr/bin/test -v calico-236437:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 11:02:25.607335  271403 oci.go:107] Successfully prepared a docker volume calico-236437
	I0317 11:02:25.607382  271403 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:02:25.607404  271403 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 11:02:25.607460  271403 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-236437:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 11:02:27.537536  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:30.036900  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:28.149376  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:30.649199  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:30.089006  271403 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-236437:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.481483792s)
	I0317 11:02:30.089037  271403 kic.go:203] duration metric: took 4.481630761s to extract preloaded images to volume ...
	W0317 11:02:30.089153  271403 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 11:02:30.089236  271403 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 11:02:30.143191  271403 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-236437 --name calico-236437 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-236437 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-236437 --network calico-236437 --ip 192.168.85.2 --volume calico-236437:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 11:02:30.402985  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Running}}
	I0317 11:02:30.421737  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:30.443380  271403 cli_runner.go:164] Run: docker exec calico-236437 stat /var/lib/dpkg/alternatives/iptables
	I0317 11:02:30.487803  271403 oci.go:144] the created container "calico-236437" has a running status.
	I0317 11:02:30.487842  271403 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa...
	I0317 11:02:30.966099  271403 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 11:02:30.989095  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:31.006629  271403 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 11:02:31.006654  271403 kic_runner.go:114] Args: [docker exec --privileged calico-236437 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 11:02:31.052822  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:31.073514  271403 machine.go:93] provisionDockerMachine start ...
	I0317 11:02:31.073608  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:31.091435  271403 main.go:141] libmachine: Using SSH client type: native
	I0317 11:02:31.091672  271403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0317 11:02:31.091683  271403 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:02:31.230753  271403 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-236437
	
	I0317 11:02:31.230782  271403 ubuntu.go:169] provisioning hostname "calico-236437"
	I0317 11:02:31.230855  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:31.248577  271403 main.go:141] libmachine: Using SSH client type: native
	I0317 11:02:31.248869  271403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0317 11:02:31.248892  271403 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-236437 && echo "calico-236437" | sudo tee /etc/hostname
	I0317 11:02:31.389908  271403 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-236437
	
	I0317 11:02:31.390001  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:31.407223  271403 main.go:141] libmachine: Using SSH client type: native
	I0317 11:02:31.407517  271403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0317 11:02:31.407545  271403 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-236437' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-236437/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-236437' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:02:31.543474  271403 main.go:141] libmachine: SSH cmd err, output: <nil>: 
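The shell block above is how the hostname gets pinned in the guest's /etc/hosts: leave the file alone if a line already ends in the hostname, rewrite an existing 127.0.1.1 entry if present, otherwise append one. The same logic as a pure Go function, a sketch rather than minikube's implementation:

	package main

	import (
		"fmt"
		"regexp"
	)

	// patchHosts mirrors the shell above: skip if already mapped, rewrite
	// an existing "127.0.1.1 ..." entry, or append a fresh one.
	func patchHosts(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return hosts + "127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(patchHosts("127.0.0.1 localhost\n127.0.1.1 oldname\n", "calico-236437"))
	}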
	I0317 11:02:31.543500  271403 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20535-4918/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-4918/.minikube}
	I0317 11:02:31.543521  271403 ubuntu.go:177] setting up certificates
	I0317 11:02:31.543534  271403 provision.go:84] configureAuth start
	I0317 11:02:31.543589  271403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-236437
	I0317 11:02:31.561231  271403 provision.go:143] copyHostCerts
	I0317 11:02:31.561284  271403 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem, removing ...
	I0317 11:02:31.561292  271403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem
	I0317 11:02:31.561354  271403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem (1082 bytes)
	I0317 11:02:31.561446  271403 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem, removing ...
	I0317 11:02:31.561454  271403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem
	I0317 11:02:31.561478  271403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem (1123 bytes)
	I0317 11:02:31.561530  271403 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem, removing ...
	I0317 11:02:31.561537  271403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem
	I0317 11:02:31.561562  271403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem (1679 bytes)
	I0317 11:02:31.561607  271403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem org=jenkins.calico-236437 san=[127.0.0.1 192.168.85.2 calico-236437 localhost minikube]
	I0317 11:02:31.992225  271403 provision.go:177] copyRemoteCerts
	I0317 11:02:31.992284  271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:02:31.992319  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:32.009677  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:32.104042  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:02:32.126981  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 11:02:32.149635  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 11:02:32.172473  271403 provision.go:87] duration metric: took 628.925048ms to configureAuth
	I0317 11:02:32.172509  271403 ubuntu.go:193] setting minikube options for container-runtime
	I0317 11:02:32.172673  271403 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:02:32.172685  271403 machine.go:96] duration metric: took 1.099153553s to provisionDockerMachine
	I0317 11:02:32.172692  271403 client.go:171] duration metric: took 7.119491835s to LocalClient.Create
	I0317 11:02:32.172711  271403 start.go:167] duration metric: took 7.119541902s to libmachine.API.Create "calico-236437"
	I0317 11:02:32.172723  271403 start.go:293] postStartSetup for "calico-236437" (driver="docker")
	I0317 11:02:32.172734  271403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:02:32.172782  271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:02:32.172832  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:32.189861  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:32.284036  271403 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:02:32.287202  271403 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 11:02:32.287240  271403 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 11:02:32.287285  271403 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 11:02:32.287295  271403 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 11:02:32.287311  271403 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/addons for local assets ...
	I0317 11:02:32.287361  271403 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/files for local assets ...
	I0317 11:02:32.287433  271403 filesync.go:149] local asset: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem -> 116902.pem in /etc/ssl/certs
	I0317 11:02:32.287518  271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:02:32.295619  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:02:32.317674  271403 start.go:296] duration metric: took 144.936846ms for postStartSetup
	I0317 11:02:32.318040  271403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-236437
	I0317 11:02:32.335236  271403 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/config.json ...
	I0317 11:02:32.335512  271403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 11:02:32.335547  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:32.351723  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:32.444147  271403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 11:02:32.448601  271403 start.go:128] duration metric: took 7.397705312s to createHost
	I0317 11:02:32.448627  271403 start.go:83] releasing machines lock for "calico-236437", held for 7.39785815s
	I0317 11:02:32.448708  271403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-236437
	I0317 11:02:32.467676  271403 ssh_runner.go:195] Run: cat /version.json
	I0317 11:02:32.467727  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:32.467758  271403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 11:02:32.467811  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:32.485718  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:32.485824  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:32.657328  271403 ssh_runner.go:195] Run: systemctl --version
	I0317 11:02:32.661411  271403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:02:32.665794  271403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 11:02:32.689140  271403 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 11:02:32.689229  271403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:02:32.714533  271403 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
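The cni.go:262 line above shows pre-existing bridge and podman CNI configs being renamed with a .mk_disabled suffix so they cannot shadow the calico config installed later. A sketch of that rename pass (run as root; directory name from the log):

	package main

	import (
		"log"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Move bridge/podman CNI configs aside, as in the log above:
		// 87-podman-bridge.conflist becomes
		// 87-podman-bridge.conflist.mk_disabled, and so on.
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				if err := os.Rename(filepath.Join(dir, name), filepath.Join(dir, name)+".mk_disabled"); err != nil {
					log.Println(err)
				}
			}
		}
	}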
	I0317 11:02:32.714561  271403 start.go:495] detecting cgroup driver to use...
	I0317 11:02:32.714602  271403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 11:02:32.714651  271403 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:02:32.726430  271403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:02:32.736704  271403 docker.go:217] disabling cri-docker service (if available) ...
	I0317 11:02:32.736750  271403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 11:02:32.749237  271403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 11:02:32.762021  271403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 11:02:32.837408  271403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 11:02:32.915411  271403 docker.go:233] disabling docker service ...
	I0317 11:02:32.915475  271403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 11:02:32.934753  271403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 11:02:32.945339  271403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 11:02:33.026602  271403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 11:02:33.105023  271403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 11:02:33.115410  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:02:33.130129  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:02:33.139140  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:02:33.148241  271403 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:02:33.148304  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:02:33.156976  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:02:33.165716  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:02:33.174440  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:02:33.183153  271403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:02:33.191608  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:02:33.200222  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:02:33.208828  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 11:02:33.217773  271403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:02:33.225411  271403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:02:33.233211  271403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:02:33.313024  271403 ssh_runner.go:195] Run: sudo systemctl restart containerd
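The run of sed edits above rewrites /etc/containerd/config.toml before the restart: forcing SystemdCgroup = false to match the detected cgroupfs driver, pinning the pause (sandbox) image, fixing the runtime type, and so on. A Go sketch of two of those edits plus the restart, using paths and values from the log (run as root):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of the SystemdCgroup and sandbox_image sed lines above.
		data = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
			ReplaceAll(data, []byte(`${1}SystemdCgroup = false`))
		data = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
			ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10"`))
		if err := os.WriteFile(path, data, 0644); err != nil {
			log.Fatal(err)
		}
		// Pick up the new config, as in the systemctl lines above.
		if out, err := exec.Command("systemctl", "restart", "containerd").CombinedOutput(); err != nil {
			log.Fatalf("restart containerd: %v\n%s", err, out)
		}
	}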
	I0317 11:02:33.412133  271403 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 11:02:33.412208  271403 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 11:02:33.415675  271403 start.go:563] Will wait 60s for crictl version
	I0317 11:02:33.415723  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:02:33.418802  271403 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:02:33.454942  271403 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 11:02:33.455012  271403 ssh_runner.go:195] Run: containerd --version
	I0317 11:02:33.477807  271403 ssh_runner.go:195] Run: containerd --version
	I0317 11:02:33.501834  271403 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 11:02:31.533659  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:33.534559  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:33.502865  271403 cli_runner.go:164] Run: docker network inspect calico-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:02:33.521053  271403 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0317 11:02:33.524629  271403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:02:33.535881  271403 kubeadm.go:883] updating cluster {Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:02:33.536009  271403 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:02:33.536072  271403 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:02:33.567514  271403 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:02:33.567533  271403 containerd.go:534] Images already preloaded, skipping extraction
	I0317 11:02:33.567587  271403 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:02:33.598171  271403 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:02:33.598192  271403 cache_images.go:84] Images are preloaded, skipping loading
	I0317 11:02:33.598199  271403 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
	I0317 11:02:33.598293  271403 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-236437 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0317 11:02:33.598353  271403 ssh_runner.go:195] Run: sudo crictl info
	I0317 11:02:33.630316  271403 cni.go:84] Creating CNI manager for "calico"
	I0317 11:02:33.630339  271403 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:02:33.630359  271403 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-236437 NodeName:calico-236437 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 11:02:33.630477  271403 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-236437"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 11:02:33.630528  271403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:02:33.638862  271403 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 11:02:33.638928  271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 11:02:33.647870  271403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0317 11:02:33.664419  271403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:02:33.680721  271403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
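The 2303-byte file copied to /var/tmp/minikube/kubeadm.yaml.new above is the multi-document kubeadm config dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity-check sketch that decodes each "---"-separated document and prints its kind, assuming gopkg.in/yaml.v3 and a local copy of the file:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				log.Fatal(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}

For the config above this would print the four apiVersion/kind pairs, ending with kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration.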
	I0317 11:02:33.697486  271403 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0317 11:02:33.700806  271403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:02:33.710885  271403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:02:33.789041  271403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:02:33.801846  271403 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437 for IP: 192.168.85.2
	I0317 11:02:33.801877  271403 certs.go:194] generating shared ca certs ...
	I0317 11:02:33.801896  271403 certs.go:226] acquiring lock for ca certs: {Name:mkf58624c63680e02907d28348d45986283847c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:33.802058  271403 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key
	I0317 11:02:33.802123  271403 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key
	I0317 11:02:33.802137  271403 certs.go:256] generating profile certs ...
	I0317 11:02:33.802202  271403 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.key
	I0317 11:02:33.802228  271403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.crt with IP's: []
	I0317 11:02:33.992607  271403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.crt ...
	I0317 11:02:33.992636  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.crt: {Name:mkb52ca2b7d5614e9a99d0baa0ecbebaddb0cc98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:33.992801  271403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.key ...
	I0317 11:02:33.992819  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.key: {Name:mk35db6f772b5eb0d0f9eef0f32d9e01b2c6129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:33.992895  271403 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4
	I0317 11:02:33.992909  271403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0317 11:02:34.206081  271403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4 ...
	I0317 11:02:34.206116  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4: {Name:mk106a12a3266907a0c64fdec49d2d65cff8ef4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:34.206307  271403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4 ...
	I0317 11:02:34.206328  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4: {Name:mkb761c01ac7dd169e99815f4912e839650faba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:34.206446  271403 certs.go:381] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt
	I0317 11:02:34.206543  271403 certs.go:385] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key
	I0317 11:02:34.206635  271403 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key
	I0317 11:02:34.206657  271403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt with IP's: []
	I0317 11:02:34.324068  271403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt ...
	I0317 11:02:34.324097  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt: {Name:mk823c22b3bc8a80bc3c82b282af79b6abc16d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:34.324254  271403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key ...
	I0317 11:02:34.324267  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key: {Name:mk875be3f1f3630e7e6086d3ef46f0bec9649fb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:34.324420  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem (1338 bytes)
	W0317 11:02:34.324451  271403 certs.go:480] ignoring /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690_empty.pem, impossibly tiny 0 bytes
	I0317 11:02:34.324461  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 11:02:34.324494  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem (1082 bytes)
	I0317 11:02:34.324524  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem (1123 bytes)
	I0317 11:02:34.324558  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem (1679 bytes)
	I0317 11:02:34.324619  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:02:34.325244  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:02:34.348013  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:02:34.369328  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:02:34.391242  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 11:02:34.413233  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 11:02:34.434100  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 11:02:34.458186  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:02:34.481676  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 11:02:34.505221  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:02:34.527325  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem --> /usr/share/ca-certificates/11690.pem (1338 bytes)
	I0317 11:02:34.551519  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /usr/share/ca-certificates/116902.pem (1708 bytes)
	I0317 11:02:34.572901  271403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:02:34.588811  271403 ssh_runner.go:195] Run: openssl version
	I0317 11:02:34.593841  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11690.pem && ln -fs /usr/share/ca-certificates/11690.pem /etc/ssl/certs/11690.pem"
	I0317 11:02:34.602126  271403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11690.pem
	I0317 11:02:34.605246  271403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:32 /usr/share/ca-certificates/11690.pem
	I0317 11:02:34.605299  271403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11690.pem
	I0317 11:02:34.611760  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11690.pem /etc/ssl/certs/51391683.0"
	I0317 11:02:34.619902  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116902.pem && ln -fs /usr/share/ca-certificates/116902.pem /etc/ssl/certs/116902.pem"
	I0317 11:02:34.627931  271403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116902.pem
	I0317 11:02:34.631011  271403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:32 /usr/share/ca-certificates/116902.pem
	I0317 11:02:34.631053  271403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116902.pem
	I0317 11:02:34.637206  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116902.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:02:34.646079  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:02:34.654752  271403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:02:34.657906  271403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:02:34.657954  271403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:02:34.664388  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
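The openssl/ln sequence above installs each CA cert under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL's hashed lookup finds trust anchors. A Go sketch of one iteration, shelling out to openssl exactly as the log does (run as root):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const pem = "/usr/share/ca-certificates/minikubeCA.pem"
		// Compute the subject hash, as in `openssl x509 -hash -noout` above.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Create the <hash>.0 symlink only if it does not already exist.
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink(pem, link); err != nil {
				log.Fatal(err)
			}
		}
	}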
	I0317 11:02:34.673111  271403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:02:34.676159  271403 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:02:34.676200  271403 kubeadm.go:392] StartCluster: {Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:02:34.676252  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 11:02:34.676286  271403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 11:02:34.710371  271403 cri.go:89] found id: ""
	I0317 11:02:34.710443  271403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:02:34.720254  271403 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:02:34.728439  271403 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 11:02:34.728511  271403 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:02:34.736684  271403 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:02:34.736699  271403 kubeadm.go:157] found existing configuration files:
	
	I0317 11:02:34.736730  271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 11:02:34.744549  271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:02:34.744604  271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:02:34.752129  271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 11:02:34.760012  271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:02:34.760069  271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:02:34.767476  271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 11:02:34.775057  271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:02:34.775105  271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:02:34.782810  271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 11:02:34.790578  271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:02:34.790624  271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
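The grep/rm pairs above are the stale-config cleanup before kubeadm init: any leftover kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed; on a first start the files are simply missing, hence the status-2 grep results. A sketch of the same loop (run as root):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := filepath.Join("/etc/kubernetes", name)
			data, err := os.ReadFile(path)
			if err != nil || bytes.Contains(data, []byte(endpoint)) {
				continue // missing (first start) or already pointing at the right endpoint
			}
			fmt.Printf("removing stale %s\n", path)
			os.Remove(path)
		}
	}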
	I0317 11:02:34.797888  271403 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 11:02:34.833333  271403 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 11:02:34.833405  271403 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:02:34.849583  271403 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 11:02:34.849687  271403 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 11:02:34.849745  271403 kubeadm.go:310] OS: Linux
	I0317 11:02:34.849817  271403 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 11:02:34.849899  271403 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 11:02:34.849997  271403 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 11:02:34.850078  271403 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 11:02:34.850154  271403 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 11:02:34.850217  271403 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 11:02:34.850265  271403 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 11:02:34.850312  271403 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 11:02:34.850353  271403 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 11:02:34.904813  271403 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:02:34.904974  271403 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:02:34.905103  271403 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:02:34.909905  271403 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:02:32.037038  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:34.537345  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:33.148942  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:35.648977  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:34.911531  271403 out.go:235]   - Generating certificates and keys ...
	I0317 11:02:34.911635  271403 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:02:34.911736  271403 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:02:35.268722  271403 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:02:35.468484  271403 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:02:35.769348  271403 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:02:35.993040  271403 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:02:36.202807  271403 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:02:36.203004  271403 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-236437 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0317 11:02:36.280951  271403 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:02:36.281084  271403 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-236437 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0317 11:02:36.463620  271403 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:02:36.510242  271403 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:02:36.900000  271403 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:02:36.900111  271403 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:02:37.075436  271403 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:02:37.263196  271403 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 11:02:37.642492  271403 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:02:37.737086  271403 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:02:38.040875  271403 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:02:38.041549  271403 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:02:38.043872  271403 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:02:36.034091  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:38.533914  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:38.045834  271403 out.go:235]   - Booting up control plane ...
	I0317 11:02:38.045950  271403 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:02:38.046019  271403 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:02:38.046719  271403 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:02:38.056299  271403 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:02:38.061457  271403 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:02:38.061534  271403 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:02:38.143998  271403 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 11:02:38.144138  271403 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 11:02:38.645417  271403 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.431671ms
	I0317 11:02:38.645515  271403 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 11:02:37.037283  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:39.537378  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:41.537760  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:37.649404  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:40.148990  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:43.147383  271403 kubeadm.go:310] [api-check] The API server is healthy after 4.501934621s
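The kubelet-check and api-check phases above are plain HTTP polling: hit a healthz endpoint until it returns 200 or the 4m0s budget runs out. A generic sketch of such a wait, using the kubelet endpoint from the log:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls a healthz URL until it returns 200 OK or the
	// timeout expires, like the kubelet-check/api-check waits above.
	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		client := &http.Client{Timeout: 2 * time.Second}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		// Endpoint and budget taken from the kubelet-check line above.
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}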
	I0317 11:02:43.158723  271403 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:02:43.168464  271403 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:02:43.184339  271403 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:02:43.184609  271403 kubeadm.go:310] [mark-control-plane] Marking the node calico-236437 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 11:02:43.191081  271403 kubeadm.go:310] [bootstrap-token] Using token: mixhu0.4ggx0rlksl4xdr10
	I0317 11:02:40.534081  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:42.534658  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:43.192582  271403 out.go:235]   - Configuring RBAC rules ...
	I0317 11:02:43.192739  271403 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:02:43.196215  271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:02:43.200588  271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:02:43.202942  271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:02:43.205272  271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:02:43.207452  271403 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:02:43.553368  271403 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:02:43.969959  271403 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:02:44.553346  271403 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:02:44.554242  271403 kubeadm.go:310] 
	I0317 11:02:44.554342  271403 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:02:44.554359  271403 kubeadm.go:310] 
	I0317 11:02:44.554471  271403 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:02:44.554492  271403 kubeadm.go:310] 
	I0317 11:02:44.554522  271403 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:02:44.554611  271403 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:02:44.554704  271403 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:02:44.554722  271403 kubeadm.go:310] 
	I0317 11:02:44.554806  271403 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:02:44.554816  271403 kubeadm.go:310] 
	I0317 11:02:44.554894  271403 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:02:44.554903  271403 kubeadm.go:310] 
	I0317 11:02:44.554993  271403 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:02:44.555106  271403 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:02:44.555207  271403 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:02:44.555217  271403 kubeadm.go:310] 
	I0317 11:02:44.555395  271403 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:02:44.555506  271403 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:02:44.555523  271403 kubeadm.go:310] 
	I0317 11:02:44.555637  271403 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mixhu0.4ggx0rlksl4xdr10 \
	I0317 11:02:44.555775  271403 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 \
	I0317 11:02:44.555807  271403 kubeadm.go:310] 	--control-plane 
	I0317 11:02:44.555816  271403 kubeadm.go:310] 
	I0317 11:02:44.555924  271403 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:02:44.555932  271403 kubeadm.go:310] 
	I0317 11:02:44.556026  271403 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mixhu0.4ggx0rlksl4xdr10 \
	I0317 11:02:44.556149  271403 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 
	I0317 11:02:44.558534  271403 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 11:02:44.558760  271403 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 11:02:44.558854  271403 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 11:02:44.558879  271403 cni.go:84] Creating CNI manager for "calico"
	I0317 11:02:44.561122  271403 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0317 11:02:44.562673  271403 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 11:02:44.562695  271403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (324369 bytes)
	I0317 11:02:44.581949  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 11:02:44.036780  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:46.036815  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:42.649172  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:44.649482  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:47.148798  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:45.843315  271403 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.261329329s)
	I0317 11:02:45.843361  271403 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:02:45.843456  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:02:45.843478  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-236437 minikube.k8s.io/updated_at=2025_03_17T11_02_45_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=calico-236437 minikube.k8s.io/primary=true
	I0317 11:02:45.850707  271403 ops.go:34] apiserver oom_adj: -16
	I0317 11:02:45.948147  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:02:46.448502  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:02:46.949084  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:02:47.449157  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:02:47.948285  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:02:48.448265  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:02:48.949124  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:02:49.015125  271403 kubeadm.go:1113] duration metric: took 3.171736497s to wait for elevateKubeSystemPrivileges
	I0317 11:02:49.015169  271403 kubeadm.go:394] duration metric: took 14.338970216s to StartCluster
	I0317 11:02:49.015191  271403 settings.go:142] acquiring lock: {Name:mk2a57d556efff40ccd4336229d7a78216b861f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:49.015295  271403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:02:49.016764  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/kubeconfig: {Name:mk686b9f6159ab958672b945ae0aa5a9c96e9ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:49.017020  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:02:49.017025  271403 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:02:49.017094  271403 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:02:49.017190  271403 addons.go:69] Setting storage-provisioner=true in profile "calico-236437"
	I0317 11:02:49.017214  271403 addons.go:238] Setting addon storage-provisioner=true in "calico-236437"
	I0317 11:02:49.017235  271403 addons.go:69] Setting default-storageclass=true in profile "calico-236437"
	I0317 11:02:49.017249  271403 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:02:49.017263  271403 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-236437"
	I0317 11:02:49.017336  271403 host.go:66] Checking if "calico-236437" exists ...
	I0317 11:02:49.017645  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:49.017831  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:49.018669  271403 out.go:177] * Verifying Kubernetes components...
	I0317 11:02:49.019970  271403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:02:49.043863  271403 addons.go:238] Setting addon default-storageclass=true in "calico-236437"
	I0317 11:02:49.043916  271403 host.go:66] Checking if "calico-236437" exists ...
	I0317 11:02:49.044307  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:49.044516  271403 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:02:45.035232  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:47.533353  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:49.045642  271403 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:02:49.045662  271403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:02:49.045707  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:49.074641  271403 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:02:49.074679  271403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:02:49.074683  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:49.074750  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:49.092825  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:49.146609  271403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:02:49.146645  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 11:02:49.231557  271403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:02:49.512613  271403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:02:49.840101  271403 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
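	The sed pipeline run at 11:02:49.146645 above edits the coredns ConfigMap in place before replacing it. Reconstructed from that command, the injected Corefile fragment looks roughly like this (a sketch showing only the added stanza ahead of the existing forward line; all other plugins elided):

	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

	The same pipeline also inserts a `log` directive just above `errors`, so DNS queries appear in the coredns logs while host.minikube.internal resolves to the Docker network gateway.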
	I0317 11:02:49.841256  271403 node_ready.go:35] waiting up to 15m0s for node "calico-236437" to be "Ready" ...
	I0317 11:02:49.904604  271403 node_ready.go:49] node "calico-236437" has status "Ready":"True"
	I0317 11:02:49.904627  271403 node_ready.go:38] duration metric: took 63.34338ms for node "calico-236437" to be "Ready" ...
	I0317 11:02:49.904637  271403 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:02:49.907969  271403 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace to be "Ready" ...
	I0317 11:02:50.110000  271403 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:02:48.037463  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:50.037520  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:49.149631  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:51.648685  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:49.534616  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:52.034129  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:50.111234  271403 addons.go:514] duration metric: took 1.094138366s for enable addons: enabled=[storage-provisioner default-storageclass]
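	For reference, the two addons enabled here can also be toggled by hand against the same profile; a minimal sketch (profile name taken from the log above):

	  minikube addons enable storage-provisioner -p calico-236437
	  minikube addons enable default-storageclass -p calico-236437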
	I0317 11:02:50.344618  271403 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-236437" context rescaled to 1 replicas
	I0317 11:02:51.912894  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:53.913540  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:52.537453  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:55.036479  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:54.148382  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:56.648833  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:54.533694  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:56.533724  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:58.534453  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:56.413348  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:58.912802  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:57.037090  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:59.538524  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:02:59.147863  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:01.148827  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:01.033848  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:03.033885  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:00.913484  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:03.413309  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:02.037409  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:04.537527  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:03.648469  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:06.148443  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:05.533183  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:07.534316  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:05.912288  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:07.913513  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:07.037320  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:09.037403  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:11.537099  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:08.148993  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:10.149164  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:10.034575  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:12.534405  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:10.413225  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:12.912794  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:13.537150  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:16.036722  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:12.648921  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:15.148704  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:15.033258  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:17.034293  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:14.913329  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:17.412933  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:18.037375  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:20.536741  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:17.649237  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:20.148773  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:19.533985  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:22.033205  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:24.033479  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:19.912177  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:21.913651  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:24.413065  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:22.537000  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:25.036714  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:22.648948  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:25.148737  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:26.534711  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:29.032989  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:26.413616  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:28.913818  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:27.037167  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:29.537027  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:31.537071  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:27.648894  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:30.148407  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:32.149154  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:31.034371  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:33.533651  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:31.412984  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:33.413031  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:33.537243  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:36.036866  245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:34.648908  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:37.149211  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:35.534459  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:38.034643  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:35.420991  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:37.913715  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:37.537340  245681 pod_ready.go:82] duration metric: took 4m0.005543433s for pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace to be "Ready" ...
	E0317 11:03:37.537353  245681 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:03:37.537374  245681 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-507725" in "kube-system" namespace to be "Ready" ...
	I0317 11:03:37.540817  245681 pod_ready.go:93] pod "etcd-pause-507725" in "kube-system" namespace has status "Ready":"True"
	I0317 11:03:37.540828  245681 pod_ready.go:82] duration metric: took 3.446936ms for pod "etcd-pause-507725" in "kube-system" namespace to be "Ready" ...
	I0317 11:03:37.540841  245681 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-507725" in "kube-system" namespace to be "Ready" ...
	I0317 11:03:37.544051  245681 pod_ready.go:93] pod "kube-apiserver-pause-507725" in "kube-system" namespace has status "Ready":"True"
	I0317 11:03:37.544059  245681 pod_ready.go:82] duration metric: took 3.212331ms for pod "kube-apiserver-pause-507725" in "kube-system" namespace to be "Ready" ...
	I0317 11:03:37.544066  245681 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-507725" in "kube-system" namespace to be "Ready" ...
	I0317 11:03:37.547376  245681 pod_ready.go:93] pod "kube-controller-manager-pause-507725" in "kube-system" namespace has status "Ready":"True"
	I0317 11:03:37.547385  245681 pod_ready.go:82] duration metric: took 3.313908ms for pod "kube-controller-manager-pause-507725" in "kube-system" namespace to be "Ready" ...
	I0317 11:03:37.547394  245681 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lmh8d" in "kube-system" namespace to be "Ready" ...
	I0317 11:03:37.550390  245681 pod_ready.go:93] pod "kube-proxy-lmh8d" in "kube-system" namespace has status "Ready":"True"
	I0317 11:03:37.550397  245681 pod_ready.go:82] duration metric: took 2.998178ms for pod "kube-proxy-lmh8d" in "kube-system" namespace to be "Ready" ...
	I0317 11:03:37.550402  245681 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-507725" in "kube-system" namespace to be "Ready" ...
	I0317 11:03:37.935598  245681 pod_ready.go:93] pod "kube-scheduler-pause-507725" in "kube-system" namespace has status "Ready":"True"
	I0317 11:03:37.935609  245681 pod_ready.go:82] duration metric: took 385.202448ms for pod "kube-scheduler-pause-507725" in "kube-system" namespace to be "Ready" ...
	I0317 11:03:37.935615  245681 pod_ready.go:39] duration metric: took 4m2.410016367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:03:37.935635  245681 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:03:37.935665  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:03:37.935716  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:03:37.970327  245681 cri.go:89] found id: "d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
	I0317 11:03:37.970343  245681 cri.go:89] found id: ""
	I0317 11:03:37.970351  245681 logs.go:282] 1 containers: [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01]
	I0317 11:03:37.970412  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:37.974106  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:03:37.974150  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:03:38.007043  245681 cri.go:89] found id: "5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
	I0317 11:03:38.007057  245681 cri.go:89] found id: ""
	I0317 11:03:38.007071  245681 logs.go:282] 1 containers: [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2]
	I0317 11:03:38.007112  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:38.010476  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:03:38.010521  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:03:38.043432  245681 cri.go:89] found id: ""
	I0317 11:03:38.043447  245681 logs.go:282] 0 containers: []
	W0317 11:03:38.043455  245681 logs.go:284] No container was found matching "coredns"
	I0317 11:03:38.043460  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:03:38.043513  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:03:38.076008  245681 cri.go:89] found id: "d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
	I0317 11:03:38.076021  245681 cri.go:89] found id: ""
	I0317 11:03:38.076027  245681 logs.go:282] 1 containers: [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373]
	I0317 11:03:38.076071  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:38.079322  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:03:38.079381  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:03:38.111550  245681 cri.go:89] found id: "491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
	I0317 11:03:38.111565  245681 cri.go:89] found id: ""
	I0317 11:03:38.111573  245681 logs.go:282] 1 containers: [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c]
	I0317 11:03:38.111619  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:38.114859  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:03:38.114913  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:03:38.147851  245681 cri.go:89] found id: "80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
	I0317 11:03:38.147866  245681 cri.go:89] found id: ""
	I0317 11:03:38.147874  245681 logs.go:282] 1 containers: [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718]
	I0317 11:03:38.147928  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:38.151478  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:03:38.151520  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:03:38.182883  245681 cri.go:89] found id: ""
	I0317 11:03:38.182896  245681 logs.go:282] 0 containers: []
	W0317 11:03:38.182902  245681 logs.go:284] No container was found matching "kindnet"
	I0317 11:03:38.182913  245681 logs.go:123] Gathering logs for kube-apiserver [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01] ...
	I0317 11:03:38.182923  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
	I0317 11:03:38.224826  245681 logs.go:123] Gathering logs for kube-scheduler [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373] ...
	I0317 11:03:38.224845  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
	I0317 11:03:38.268744  245681 logs.go:123] Gathering logs for kube-controller-manager [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718] ...
	I0317 11:03:38.268764  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
	I0317 11:03:38.316908  245681 logs.go:123] Gathering logs for containerd ...
	I0317 11:03:38.316926  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:03:38.362274  245681 logs.go:123] Gathering logs for kubelet ...
	I0317 11:03:38.362294  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:03:38.458593  245681 logs.go:123] Gathering logs for etcd [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2] ...
	I0317 11:03:38.458613  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
	I0317 11:03:38.498455  245681 logs.go:123] Gathering logs for kube-proxy [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c] ...
	I0317 11:03:38.498475  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
	I0317 11:03:38.533550  245681 logs.go:123] Gathering logs for container status ...
	I0317 11:03:38.533573  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:03:38.569617  245681 logs.go:123] Gathering logs for dmesg ...
	I0317 11:03:38.569637  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:03:38.587868  245681 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:03:38.587884  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
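	The gathering cycle above reduces to: resolve each component's container ID with crictl, then tail its logs. A minimal shell reproduction built from the exact commands in the log (the head -n1 is an assumption, in case a name matches more than one container):

	  for name in kube-apiserver etcd kube-scheduler kube-proxy kube-controller-manager; do
	    # find the container ID for this component, if it exists
	    id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)
	    # tail its last 400 log lines, mirroring the logs.go calls above
	    [ -n "$id" ] && sudo /usr/bin/crictl logs --tail 400 "$id"
	  done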
	I0317 11:03:41.171361  245681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:03:41.182605  245681 api_server.go:72] duration metric: took 4m6.106338016s to wait for apiserver process to appear ...
	I0317 11:03:41.182618  245681 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:03:41.182644  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:03:41.182681  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:03:41.215179  245681 cri.go:89] found id: "d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
	I0317 11:03:41.215193  245681 cri.go:89] found id: ""
	I0317 11:03:41.215200  245681 logs.go:282] 1 containers: [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01]
	I0317 11:03:41.215339  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:41.218687  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:03:41.218741  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:03:41.250752  245681 cri.go:89] found id: "5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
	I0317 11:03:41.250768  245681 cri.go:89] found id: ""
	I0317 11:03:41.250775  245681 logs.go:282] 1 containers: [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2]
	I0317 11:03:41.250826  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:41.254355  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:03:41.254410  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:03:41.287190  245681 cri.go:89] found id: ""
	I0317 11:03:41.287208  245681 logs.go:282] 0 containers: []
	W0317 11:03:41.287218  245681 logs.go:284] No container was found matching "coredns"
	I0317 11:03:41.287225  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:03:41.287329  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:03:41.320268  245681 cri.go:89] found id: "d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
	I0317 11:03:41.320283  245681 cri.go:89] found id: ""
	I0317 11:03:41.320293  245681 logs.go:282] 1 containers: [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373]
	I0317 11:03:41.320351  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:41.323878  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:03:41.323935  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:03:41.355657  245681 cri.go:89] found id: "491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
	I0317 11:03:41.355668  245681 cri.go:89] found id: ""
	I0317 11:03:41.355674  245681 logs.go:282] 1 containers: [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c]
	I0317 11:03:41.355714  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:41.358944  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:03:41.359001  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:03:41.391119  245681 cri.go:89] found id: "80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
	I0317 11:03:41.391133  245681 cri.go:89] found id: ""
	I0317 11:03:41.391141  245681 logs.go:282] 1 containers: [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718]
	I0317 11:03:41.391188  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:41.394575  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:03:41.394626  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:03:41.428649  245681 cri.go:89] found id: ""
	I0317 11:03:41.428661  245681 logs.go:282] 0 containers: []
	W0317 11:03:41.428667  245681 logs.go:284] No container was found matching "kindnet"
	I0317 11:03:41.428677  245681 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:03:41.428688  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:03:41.512438  245681 logs.go:123] Gathering logs for etcd [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2] ...
	I0317 11:03:41.512458  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
	I0317 11:03:41.552239  245681 logs.go:123] Gathering logs for kube-proxy [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c] ...
	I0317 11:03:41.552256  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
	I0317 11:03:41.586185  245681 logs.go:123] Gathering logs for containerd ...
	I0317 11:03:41.586200  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:03:41.628142  245681 logs.go:123] Gathering logs for dmesg ...
	I0317 11:03:41.628159  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:03:41.646089  245681 logs.go:123] Gathering logs for kube-apiserver [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01] ...
	I0317 11:03:41.646106  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
	I0317 11:03:41.685793  245681 logs.go:123] Gathering logs for kube-scheduler [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373] ...
	I0317 11:03:41.685809  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
	I0317 11:03:41.728438  245681 logs.go:123] Gathering logs for kube-controller-manager [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718] ...
	I0317 11:03:41.728455  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
	I0317 11:03:41.773094  245681 logs.go:123] Gathering logs for container status ...
	I0317 11:03:41.773111  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:03:39.149731  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:41.648838  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:40.533971  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:42.534469  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:40.412687  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:42.413498  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:41.807752  245681 logs.go:123] Gathering logs for kubelet ...
	I0317 11:03:41.807768  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:03:44.399144  245681 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0317 11:03:44.402976  245681 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0317 11:03:44.404151  245681 api_server.go:141] control plane version: v1.32.2
	I0317 11:03:44.404169  245681 api_server.go:131] duration metric: took 3.221544822s to wait for apiserver health ...
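	The healthz probe above can be reproduced from a shell; a sketch, assuming the cluster keeps kubeadm's default of allowing unauthenticated reads of /healthz (-k skips verification of the self-signed serving certificate):

	  curl -sk https://192.168.103.2:8443/healthz
	  # expected output on a healthy apiserver: ok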
	I0317 11:03:44.404178  245681 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:03:44.404201  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:03:44.404249  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:03:44.437034  245681 cri.go:89] found id: "d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
	I0317 11:03:44.437050  245681 cri.go:89] found id: ""
	I0317 11:03:44.437056  245681 logs.go:282] 1 containers: [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01]
	I0317 11:03:44.437103  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:44.440667  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:03:44.440724  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:03:44.474416  245681 cri.go:89] found id: "5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
	I0317 11:03:44.474427  245681 cri.go:89] found id: ""
	I0317 11:03:44.474433  245681 logs.go:282] 1 containers: [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2]
	I0317 11:03:44.474491  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:44.478025  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:03:44.478078  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:03:44.510846  245681 cri.go:89] found id: ""
	I0317 11:03:44.510862  245681 logs.go:282] 0 containers: []
	W0317 11:03:44.510868  245681 logs.go:284] No container was found matching "coredns"
	I0317 11:03:44.510873  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:03:44.510916  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:03:44.545105  245681 cri.go:89] found id: "d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
	I0317 11:03:44.545116  245681 cri.go:89] found id: ""
	I0317 11:03:44.545121  245681 logs.go:282] 1 containers: [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373]
	I0317 11:03:44.545164  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:44.548666  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:03:44.548712  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:03:44.580776  245681 cri.go:89] found id: "491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
	I0317 11:03:44.580793  245681 cri.go:89] found id: ""
	I0317 11:03:44.580844  245681 logs.go:282] 1 containers: [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c]
	I0317 11:03:44.580891  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:44.584414  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:03:44.584460  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:03:44.616416  245681 cri.go:89] found id: "80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
	I0317 11:03:44.616430  245681 cri.go:89] found id: ""
	I0317 11:03:44.616438  245681 logs.go:282] 1 containers: [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718]
	I0317 11:03:44.616488  245681 ssh_runner.go:195] Run: which crictl
	I0317 11:03:44.619818  245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:03:44.619870  245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:03:44.653683  245681 cri.go:89] found id: ""
	I0317 11:03:44.653695  245681 logs.go:282] 0 containers: []
	W0317 11:03:44.653702  245681 logs.go:284] No container was found matching "kindnet"
	I0317 11:03:44.653713  245681 logs.go:123] Gathering logs for kube-proxy [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c] ...
	I0317 11:03:44.653723  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
	I0317 11:03:44.688280  245681 logs.go:123] Gathering logs for kube-controller-manager [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718] ...
	I0317 11:03:44.688295  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
	I0317 11:03:44.737319  245681 logs.go:123] Gathering logs for dmesg ...
	I0317 11:03:44.737337  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:03:44.756391  245681 logs.go:123] Gathering logs for kube-apiserver [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01] ...
	I0317 11:03:44.756405  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
	I0317 11:03:44.795981  245681 logs.go:123] Gathering logs for containerd ...
	I0317 11:03:44.796001  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:03:44.838624  245681 logs.go:123] Gathering logs for container status ...
	I0317 11:03:44.838641  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:03:44.875163  245681 logs.go:123] Gathering logs for kubelet ...
	I0317 11:03:44.875189  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:03:44.964409  245681 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:03:44.964429  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:03:45.046002  245681 logs.go:123] Gathering logs for etcd [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2] ...
	I0317 11:03:45.046017  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
	I0317 11:03:45.084825  245681 logs.go:123] Gathering logs for kube-scheduler [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373] ...
	I0317 11:03:45.084842  245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
	I0317 11:03:44.148415  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:46.148535  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:45.033698  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:47.533224  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:44.912999  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:47.412832  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:49.413266  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:47.629139  245681 system_pods.go:59] 7 kube-system pods found
	I0317 11:03:47.629170  245681 system_pods.go:61] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:47.629175  245681 system_pods.go:61] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:47.629184  245681 system_pods.go:61] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:47.629187  245681 system_pods.go:61] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:47.629190  245681 system_pods.go:61] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:47.629193  245681 system_pods.go:61] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:47.629195  245681 system_pods.go:61] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:47.629200  245681 system_pods.go:74] duration metric: took 3.225017966s to wait for pod list to return data ...
	I0317 11:03:47.629206  245681 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:03:47.631444  245681 default_sa.go:45] found service account: "default"
	I0317 11:03:47.631456  245681 default_sa.go:55] duration metric: took 2.245448ms for default service account to be created ...
	I0317 11:03:47.631462  245681 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:03:47.633680  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:47.633694  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:47.633698  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:47.633703  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:47.633707  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:47.633710  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:47.633713  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:47.633715  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:47.633740  245681 retry.go:31] will retry after 208.624093ms: missing components: kube-dns
	I0317 11:03:47.845983  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:47.846001  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:47.846005  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:47.846011  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:47.846014  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:47.846017  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:47.846020  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:47.846022  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:47.846034  245681 retry.go:31] will retry after 322.393506ms: missing components: kube-dns
	I0317 11:03:48.172551  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:48.172572  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:48.172576  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:48.172582  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:48.172585  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:48.172589  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:48.172592  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:48.172596  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:48.172607  245681 retry.go:31] will retry after 329.587841ms: missing components: kube-dns
	I0317 11:03:48.507513  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:48.507529  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:48.507534  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:48.507541  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:48.507545  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:48.507548  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:48.507551  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:48.507553  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:48.507564  245681 retry.go:31] will retry after 486.130076ms: missing components: kube-dns
	I0317 11:03:48.996755  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:48.996784  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:48.996788  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:48.996795  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:48.996798  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:48.996801  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:48.996803  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:48.996808  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:48.996821  245681 retry.go:31] will retry after 594.939063ms: missing components: kube-dns
	I0317 11:03:49.595554  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:49.595573  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:49.595577  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:49.595583  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:49.595586  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:49.595589  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:49.595592  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:49.595594  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:49.595605  245681 retry.go:31] will retry after 584.315761ms: missing components: kube-dns
	I0317 11:03:50.183549  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:50.183577  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:50.183581  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:50.183587  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:50.183590  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:50.183593  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:50.183595  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:50.183597  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:50.183611  245681 retry.go:31] will retry after 818.942859ms: missing components: kube-dns
	I0317 11:03:51.006535  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:51.006552  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:51.006556  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:51.006562  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:51.006565  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:51.006568  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:51.006570  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:51.006572  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:51.006583  245681 retry.go:31] will retry after 1.023904266s: missing components: kube-dns
	I0317 11:03:48.148792  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:50.649053  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:49.533914  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:52.033719  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:51.913217  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:53.913804  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:52.034391  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:52.034407  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:52.034411  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:52.034418  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:52.034423  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:52.034426  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:52.034430  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:52.034432  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:52.034443  245681 retry.go:31] will retry after 1.438418964s: missing components: kube-dns
	I0317 11:03:53.477096  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:53.477115  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:53.477119  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:53.477125  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:53.477128  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:53.477131  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:53.477133  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:53.477136  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:53.477147  245681 retry.go:31] will retry after 1.706517056s: missing components: kube-dns
	I0317 11:03:55.187542  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:55.187561  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:55.187567  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:55.187574  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:55.187577  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:55.187580  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:55.187582  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:55.187584  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:55.187596  245681 retry.go:31] will retry after 2.016724605s: missing components: kube-dns
	I0317 11:03:52.649095  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:55.148710  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:57.149810  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:54.533660  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:57.034175  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:56.413532  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:58.913374  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:57.209012  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:03:57.209030  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:03:57.209034  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:03:57.209040  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:03:57.209043  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:03:57.209046  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:03:57.209049  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:03:57.209051  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:03:57.209070  245681 retry.go:31] will retry after 2.863078821s: missing components: kube-dns
	I0317 11:04:00.077082  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:04:00.077102  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:00.077106  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:04:00.077112  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:00.077116  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:04:00.077119  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:04:00.077121  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:04:00.077123  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:04:00.077136  245681 retry.go:31] will retry after 3.357048609s: missing components: kube-dns
	I0317 11:03:59.648763  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:02.148762  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:03:59.533729  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:01.534202  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:03.536116  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:01.413655  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:03.413742  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:03.438019  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:04:03.438039  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:03.438044  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:04:03.438049  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:03.438053  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:04:03.438056  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:04:03.438060  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:04:03.438062  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:04:03.438075  245681 retry.go:31] will retry after 4.751945119s: missing components: kube-dns
	I0317 11:04:04.648885  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:06.649127  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:06.033695  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:08.534321  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:05.913392  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:08.412709  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:08.194256  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:04:08.194274  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:08.194278  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:04:08.194286  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:08.194289  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:04:08.194292  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:04:08.194294  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:04:08.194296  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:04:08.194307  245681 retry.go:31] will retry after 4.655703533s: missing components: kube-dns
	I0317 11:04:09.148372  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:11.149273  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:11.033836  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:13.034472  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:10.412969  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:12.413781  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:12.853750  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:04:12.853770  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:12.853776  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:04:12.853784  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:12.853788  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:04:12.853792  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:04:12.853796  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:04:12.853799  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:04:12.853813  245681 retry.go:31] will retry after 6.617216886s: missing components: kube-dns
	I0317 11:04:13.648760  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:16.150790  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:15.533314  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:17.533852  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:14.913146  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:17.413191  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:19.414688  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:19.474873  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:04:19.474894  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:19.474898  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:04:19.474904  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:19.474907  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:04:19.474913  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:04:19.474915  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:04:19.474917  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:04:19.474931  245681 retry.go:31] will retry after 7.39578455s: missing components: kube-dns
	I0317 11:04:18.648614  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:20.649479  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:20.033768  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:22.534210  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:21.913011  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:23.913212  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:23.148451  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:25.149265  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:25.033686  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:27.534095  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:25.913387  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:27.913564  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:26.874163  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:04:26.874182  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:26.874187  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:04:26.874195  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:26.874198  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:04:26.874201  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:04:26.874204  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:04:26.874206  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:04:26.874219  245681 retry.go:31] will retry after 12.601914902s: missing components: kube-dns
	I0317 11:04:27.648526  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:29.649214  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:31.649666  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:30.033770  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:32.533597  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:30.412714  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:32.413230  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:34.413482  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:34.148783  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:36.148832  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:34.533976  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:37.033284  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:39.033794  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:36.912932  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:38.913417  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:39.480299  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:04:39.480316  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:39.480320  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:04:39.480326  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:39.480329  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:04:39.480331  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:04:39.480334  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:04:39.480336  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:04:39.480349  245681 retry.go:31] will retry after 16.356369315s: missing components: kube-dns
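The retry.go entries above show the wait-loop's intervals growing roughly geometrically with jitter (330ms, 486ms, 595ms, ... 12.6s, 16.4s) while the same "missing components: kube-dns" condition persists. A minimal Go sketch of that capped, jittered exponential-backoff pattern follows; the function name and the 300ms/20s bounds are illustrative assumptions, not minikube's actual implementation.

```go
// Sketch of a capped, jittered exponential backoff like the one visible in
// the retry.go lines above. Names and constants are illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls check until it succeeds or the deadline passes,
// sleeping an exponentially growing, jittered interval between attempts.
func retryWithBackoff(deadline time.Duration, check func() error) error {
	start := time.Now()
	wait := 300 * time.Millisecond // assumed initial interval
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		// Jitter by up to 50% so concurrent waiters do not synchronize.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2 // double, mimicking the growth seen in the log
		if wait > 20*time.Second {
			wait = 20 * time.Second // cap keeps individual waits bounded
		}
	}
}

func main() {
	err := retryWithBackoff(2*time.Second, func() error {
		return errors.New("missing components: kube-dns")
	})
	fmt.Println(err)
}
```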
	I0317 11:04:38.648736  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:40.648879  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:41.034493  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:43.533541  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:40.914920  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:43.412457  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:43.148696  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:45.648517  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:45.534005  255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:45.534028  255203 pod_ready.go:82] duration metric: took 4m0.005195322s for pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace to be "Ready" ...
	E0317 11:04:45.534037  255203 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:04:45.534043  255203 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:04:45.537073  255203 pod_ready.go:93] pod "etcd-auto-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:04:45.537096  255203 pod_ready.go:82] duration metric: took 3.045951ms for pod "etcd-auto-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:04:45.537110  255203 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:04:45.540213  255203 pod_ready.go:93] pod "kube-apiserver-auto-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:04:45.540229  255203 pod_ready.go:82] duration metric: took 3.112401ms for pod "kube-apiserver-auto-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:04:45.540238  255203 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:04:45.543301  255203 pod_ready.go:93] pod "kube-controller-manager-auto-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:04:45.543315  255203 pod_ready.go:82] duration metric: took 3.071405ms for pod "kube-controller-manager-auto-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:04:45.543323  255203 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-jcdsz" in "kube-system" namespace to be "Ready" ...
	I0317 11:04:45.546487  255203 pod_ready.go:93] pod "kube-proxy-jcdsz" in "kube-system" namespace has status "Ready":"True"
	I0317 11:04:45.546501  255203 pod_ready.go:82] duration metric: took 3.173334ms for pod "kube-proxy-jcdsz" in "kube-system" namespace to be "Ready" ...
	I0317 11:04:45.546507  255203 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:04:45.932558  255203 pod_ready.go:93] pod "kube-scheduler-auto-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:04:45.932579  255203 pod_ready.go:82] duration metric: took 386.066634ms for pod "kube-scheduler-auto-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:04:45.932587  255203 pod_ready.go:39] duration metric: took 4m2.409980263s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
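The pod_ready.go lines above poll each system pod and report the value of its Ready condition (the repeated `"Ready":"False"` entries). A sketch of that per-pod check using the public client-go API follows; the kubeconfig path and pod name are placeholders, and this is an illustration of the condition lookup, not minikube's code.

```go
// Sketch of the Ready-condition check that pod_ready.go performs for each
// system pod. The kubeconfig path and pod coordinates are placeholders.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// signal the log lines above print as `"Ready":"False"`.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(
		context.Background(), "coredns-668d6bf9bc-rl5k6", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}
```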
	I0317 11:04:45.932604  255203 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:04:45.932640  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:04:45.932697  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:04:45.965778  255203 cri.go:89] found id: "079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
	I0317 11:04:45.965803  255203 cri.go:89] found id: ""
	I0317 11:04:45.965811  255203 logs.go:282] 1 containers: [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a]
	I0317 11:04:45.965866  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:45.969834  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:04:45.969906  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:04:46.001786  255203 cri.go:89] found id: "1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
	I0317 11:04:46.001809  255203 cri.go:89] found id: ""
	I0317 11:04:46.001817  255203 logs.go:282] 1 containers: [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc]
	I0317 11:04:46.001882  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:46.005480  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:04:46.005540  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:04:46.036917  255203 cri.go:89] found id: ""
	I0317 11:04:46.036949  255203 logs.go:282] 0 containers: []
	W0317 11:04:46.036959  255203 logs.go:284] No container was found matching "coredns"
	I0317 11:04:46.036966  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:04:46.037030  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:04:46.070471  255203 cri.go:89] found id: "a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
	I0317 11:04:46.070495  255203 cri.go:89] found id: ""
	I0317 11:04:46.070502  255203 logs.go:282] 1 containers: [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b]
	I0317 11:04:46.070548  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:46.073947  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:04:46.074013  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:04:46.105813  255203 cri.go:89] found id: "a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
	I0317 11:04:46.105851  255203 cri.go:89] found id: ""
	I0317 11:04:46.105858  255203 logs.go:282] 1 containers: [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e]
	I0317 11:04:46.105906  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:46.109214  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:04:46.109274  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:04:46.141415  255203 cri.go:89] found id: "00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
	I0317 11:04:46.141437  255203 cri.go:89] found id: ""
	I0317 11:04:46.141446  255203 logs.go:282] 1 containers: [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f]
	I0317 11:04:46.141505  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:46.145603  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:04:46.145667  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:04:46.181315  255203 cri.go:89] found id: ""
	I0317 11:04:46.181339  255203 logs.go:282] 0 containers: []
	W0317 11:04:46.181348  255203 logs.go:284] No container was found matching "kindnet"
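Each cri.go/logs.go pass above runs `sudo crictl ps -a --quiet --name=<component>` per control-plane component and records the returned container IDs (an empty result is logged as `0 containers: []`). A self-contained Go sketch of that query follows, using the exact crictl invocation from the log; running it locally rather than through minikube's SSH runner is a simplification.

```go
// Sketch of the crictl listing the cri.go lines above perform: collect the
// IDs of all containers (running or exited) whose name matches a component.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the container IDs crictl reports for the given name
// filter; an empty slice corresponds to the log's `0 containers: []`.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Printf("%s: %v\n", component, err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), component, ids)
	}
}
```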
	I0317 11:04:46.181365  255203 logs.go:123] Gathering logs for dmesg ...
	I0317 11:04:46.181379  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:04:46.199524  255203 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:04:46.199555  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:04:46.284323  255203 logs.go:123] Gathering logs for kube-apiserver [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a] ...
	I0317 11:04:46.284351  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
	I0317 11:04:46.324591  255203 logs.go:123] Gathering logs for etcd [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc] ...
	I0317 11:04:46.324619  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
	I0317 11:04:46.361651  255203 logs.go:123] Gathering logs for kube-scheduler [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b] ...
	I0317 11:04:46.361679  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
	I0317 11:04:46.401009  255203 logs.go:123] Gathering logs for kube-proxy [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e] ...
	I0317 11:04:46.401039  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
	I0317 11:04:46.434852  255203 logs.go:123] Gathering logs for kube-controller-manager [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f] ...
	I0317 11:04:46.434882  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
	I0317 11:04:46.482469  255203 logs.go:123] Gathering logs for container status ...
	I0317 11:04:46.482498  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:04:46.518409  255203 logs.go:123] Gathering logs for kubelet ...
	I0317 11:04:46.518439  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:04:46.610561  255203 logs.go:123] Gathering logs for containerd ...
	I0317 11:04:46.610595  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:04:49.156457  255203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:04:49.167188  255203 api_server.go:72] duration metric: took 4m6.341889458s to wait for apiserver process to appear ...
	I0317 11:04:49.167208  255203 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:04:49.167234  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:04:49.167301  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:04:49.198198  255203 cri.go:89] found id: "079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
	I0317 11:04:49.198227  255203 cri.go:89] found id: ""
	I0317 11:04:49.198237  255203 logs.go:282] 1 containers: [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a]
	I0317 11:04:49.198301  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:49.201745  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:04:49.201804  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:04:49.236410  255203 cri.go:89] found id: "1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
	I0317 11:04:49.236433  255203 cri.go:89] found id: ""
	I0317 11:04:49.236442  255203 logs.go:282] 1 containers: [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc]
	I0317 11:04:49.236497  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:49.240071  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:04:49.240149  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:04:49.273265  255203 cri.go:89] found id: ""
	I0317 11:04:49.273292  255203 logs.go:282] 0 containers: []
	W0317 11:04:49.273303  255203 logs.go:284] No container was found matching "coredns"
	I0317 11:04:49.273310  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:04:49.273378  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:04:49.304655  255203 cri.go:89] found id: "a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
	I0317 11:04:49.304679  255203 cri.go:89] found id: ""
	I0317 11:04:49.304689  255203 logs.go:282] 1 containers: [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b]
	I0317 11:04:49.304736  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:49.308178  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:04:49.308234  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:04:49.339992  255203 cri.go:89] found id: "a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
	I0317 11:04:49.340017  255203 cri.go:89] found id: ""
	I0317 11:04:49.340026  255203 logs.go:282] 1 containers: [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e]
	I0317 11:04:49.340083  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:49.343381  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:04:49.343446  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:04:49.375770  255203 cri.go:89] found id: "00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
	I0317 11:04:49.375791  255203 cri.go:89] found id: ""
	I0317 11:04:49.375800  255203 logs.go:282] 1 containers: [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f]
	I0317 11:04:49.375865  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:49.379083  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:04:49.379144  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:04:49.412093  255203 cri.go:89] found id: ""
	I0317 11:04:49.412119  255203 logs.go:282] 0 containers: []
	W0317 11:04:49.412130  255203 logs.go:284] No container was found matching "kindnet"
	I0317 11:04:49.412144  255203 logs.go:123] Gathering logs for dmesg ...
	I0317 11:04:49.412161  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:04:49.430275  255203 logs.go:123] Gathering logs for kube-apiserver [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a] ...
	I0317 11:04:49.430306  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
	I0317 11:04:49.469418  255203 logs.go:123] Gathering logs for kube-scheduler [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b] ...
	I0317 11:04:49.469446  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
	I0317 11:04:45.412555  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:47.912953  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:47.648598  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:49.649521  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:52.148897  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:49.509906  255203 logs.go:123] Gathering logs for kube-proxy [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e] ...
	I0317 11:04:49.509936  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
	I0317 11:04:49.543381  255203 logs.go:123] Gathering logs for container status ...
	I0317 11:04:49.543409  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:04:49.578445  255203 logs.go:123] Gathering logs for kubelet ...
	I0317 11:04:49.578472  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:04:49.671453  255203 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:04:49.671484  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:04:49.752296  255203 logs.go:123] Gathering logs for etcd [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc] ...
	I0317 11:04:49.752327  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
	I0317 11:04:49.789145  255203 logs.go:123] Gathering logs for kube-controller-manager [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f] ...
	I0317 11:04:49.789175  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
	I0317 11:04:49.833437  255203 logs.go:123] Gathering logs for containerd ...
	I0317 11:04:49.833478  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:04:52.376428  255203 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0317 11:04:52.380160  255203 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0317 11:04:52.381121  255203 api_server.go:141] control plane version: v1.32.2
	I0317 11:04:52.381147  255203 api_server.go:131] duration metric: took 3.213930735s to wait for apiserver health ...
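The api_server.go lines above probe `https://192.168.94.2:8443/healthz` until it returns 200 with body "ok". A stripped-down Go sketch of that probe follows; the real check authenticates with the cluster's client certificates, whereas this illustration skips server-certificate verification, which is suitable only as a demonstration of the polling target.

```go
// Sketch of the apiserver healthz probe logged above. Illustration only:
// it accepts any server certificate instead of using cluster credentials.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues a GET against the healthz endpoint and treats any
// non-200 status as an error, mirroring the 200/"ok" pair in the log.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.94.2:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
```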
	I0317 11:04:52.381154  255203 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:04:52.381173  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:04:52.381222  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:04:52.415960  255203 cri.go:89] found id: "079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
	I0317 11:04:52.415982  255203 cri.go:89] found id: ""
	I0317 11:04:52.415991  255203 logs.go:282] 1 containers: [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a]
	I0317 11:04:52.416048  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:52.419718  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:04:52.419772  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:04:52.454820  255203 cri.go:89] found id: "1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
	I0317 11:04:52.454908  255203 cri.go:89] found id: ""
	I0317 11:04:52.454923  255203 logs.go:282] 1 containers: [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc]
	I0317 11:04:52.454991  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:52.459020  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:04:52.459085  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:04:52.491803  255203 cri.go:89] found id: ""
	I0317 11:04:52.491834  255203 logs.go:282] 0 containers: []
	W0317 11:04:52.491843  255203 logs.go:284] No container was found matching "coredns"
	I0317 11:04:52.491849  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:04:52.491903  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:04:52.526170  255203 cri.go:89] found id: "a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
	I0317 11:04:52.526199  255203 cri.go:89] found id: ""
	I0317 11:04:52.526209  255203 logs.go:282] 1 containers: [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b]
	I0317 11:04:52.526272  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:52.529827  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:04:52.529903  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:04:52.562281  255203 cri.go:89] found id: "a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
	I0317 11:04:52.562311  255203 cri.go:89] found id: ""
	I0317 11:04:52.562320  255203 logs.go:282] 1 containers: [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e]
	I0317 11:04:52.562383  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:52.565941  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:04:52.566001  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:04:52.598944  255203 cri.go:89] found id: "00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
	I0317 11:04:52.598971  255203 cri.go:89] found id: ""
	I0317 11:04:52.598982  255203 logs.go:282] 1 containers: [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f]
	I0317 11:04:52.599044  255203 ssh_runner.go:195] Run: which crictl
	I0317 11:04:52.602585  255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:04:52.602649  255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:04:52.635595  255203 cri.go:89] found id: ""
	I0317 11:04:52.635617  255203 logs.go:282] 0 containers: []
	W0317 11:04:52.635626  255203 logs.go:284] No container was found matching "kindnet"
	I0317 11:04:52.635638  255203 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:04:52.635653  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:04:52.721412  255203 logs.go:123] Gathering logs for etcd [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc] ...
	I0317 11:04:52.721442  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
	I0317 11:04:52.761651  255203 logs.go:123] Gathering logs for kube-scheduler [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b] ...
	I0317 11:04:52.761685  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
	I0317 11:04:52.801775  255203 logs.go:123] Gathering logs for kube-controller-manager [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f] ...
	I0317 11:04:52.801810  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
	I0317 11:04:52.848366  255203 logs.go:123] Gathering logs for containerd ...
	I0317 11:04:52.848401  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:04:52.891075  255203 logs.go:123] Gathering logs for container status ...
	I0317 11:04:52.891112  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:04:52.954106  255203 logs.go:123] Gathering logs for kube-apiserver [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a] ...
	I0317 11:04:52.954142  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
	I0317 11:04:52.995653  255203 logs.go:123] Gathering logs for kube-proxy [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e] ...
	I0317 11:04:52.995685  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
	I0317 11:04:53.032179  255203 logs.go:123] Gathering logs for kubelet ...
	I0317 11:04:53.032210  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:04:53.124349  255203 logs.go:123] Gathering logs for dmesg ...
	I0317 11:04:53.124385  255203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:04:49.913272  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:52.413818  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:55.841385  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:04:55.841404  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:55.841409  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:04:55.841416  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:55.841419  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:04:55.841421  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:04:55.841424  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:04:55.841426  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:04:55.841437  245681 retry.go:31] will retry after 19.064243371s: missing components: kube-dns
	I0317 11:04:54.149147  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:56.149457  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:55.650091  255203 system_pods.go:59] 8 kube-system pods found
	I0317 11:04:55.650133  255203 system_pods.go:61] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:55.650142  255203 system_pods.go:61] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:04:55.650153  255203 system_pods.go:61] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:55.650160  255203 system_pods.go:61] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:04:55.650166  255203 system_pods.go:61] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:04:55.650171  255203 system_pods.go:61] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:04:55.650177  255203 system_pods.go:61] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:04:55.650182  255203 system_pods.go:61] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:04:55.650191  255203 system_pods.go:74] duration metric: took 3.269030261s to wait for pod list to return data ...
	I0317 11:04:55.650201  255203 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:04:55.652891  255203 default_sa.go:45] found service account: "default"
	I0317 11:04:55.652914  255203 default_sa.go:55] duration metric: took 2.706728ms for default service account to be created ...
	I0317 11:04:55.652921  255203 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:04:55.655394  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:04:55.655429  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:55.655437  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:04:55.655447  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:55.655452  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:04:55.655460  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:04:55.655464  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:04:55.655467  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:04:55.655473  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:04:55.655494  255203 retry.go:31] will retry after 201.27772ms: missing components: kube-dns
	I0317 11:04:55.861044  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:04:55.861077  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:55.861085  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:04:55.861094  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:55.861098  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:04:55.861103  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:04:55.861106  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:04:55.861109  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:04:55.861112  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:04:55.861126  255203 retry.go:31] will retry after 312.286943ms: missing components: kube-dns
	I0317 11:04:56.176707  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:04:56.176740  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:56.176746  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:04:56.176754  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:56.176758  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:04:56.176762  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:04:56.176765  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:04:56.176768  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:04:56.176771  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:04:56.176786  255203 retry.go:31] will retry after 421.052014ms: missing components: kube-dns
	I0317 11:04:56.602089  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:04:56.602121  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:56.602126  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:04:56.602134  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:56.602138  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:04:56.602142  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:04:56.602145  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:04:56.602151  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:04:56.602154  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:04:56.602166  255203 retry.go:31] will retry after 469.77104ms: missing components: kube-dns
	I0317 11:04:57.076461  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:04:57.076568  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:57.076587  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:04:57.076608  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:57.076629  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:04:57.076647  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:04:57.076662  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:04:57.076676  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:04:57.076690  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:04:57.076722  255203 retry.go:31] will retry after 656.119155ms: missing components: kube-dns
	I0317 11:04:57.736412  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:04:57.736456  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:57.736464  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:04:57.736474  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:57.736480  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:04:57.736486  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:04:57.736491  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:04:57.736497  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:04:57.736509  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:04:57.736526  255203 retry.go:31] will retry after 893.562069ms: missing components: kube-dns
	I0317 11:04:58.633942  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:04:58.633986  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:58.633995  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:04:58.634007  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:58.634014  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:04:58.634024  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:04:58.634033  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:04:58.634038  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:04:58.634043  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:04:58.634062  255203 retry.go:31] will retry after 1.122298923s: missing components: kube-dns
	I0317 11:04:54.913131  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:56.913269  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:59.413324  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:58.648543  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:00.649803  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:59.759953  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:04:59.759986  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:04:59.759992  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:04:59.759999  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:04:59.760004  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:04:59.760008  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:04:59.760011  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:04:59.760015  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:04:59.760018  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:04:59.760030  255203 retry.go:31] will retry after 1.218511595s: missing components: kube-dns
	I0317 11:05:00.982785  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:00.982829  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:00.982838  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:00.982845  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:00.982849  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:00.982854  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:00.982857  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:00.982861  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:00.982865  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:00.982880  255203 retry.go:31] will retry after 1.171774567s: missing components: kube-dns
	I0317 11:05:02.158314  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:02.158348  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:02.158354  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:02.158360  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:02.158364  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:02.158368  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:02.158372  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:02.158376  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:02.158379  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:02.158391  255203 retry.go:31] will retry after 1.696837803s: missing components: kube-dns
	I0317 11:05:03.858863  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:03.858900  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:03.858906  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:03.858915  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:03.858919  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:03.858926  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:03.858930  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:03.858933  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:03.858936  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:03.858949  255203 retry.go:31] will retry after 2.428655233s: missing components: kube-dns
	I0317 11:05:01.414244  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:03.915020  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:03.149068  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:05.648866  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:06.291480  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:06.291513  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:06.291519  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:06.291528  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:06.291532  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:06.291537  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:06.291540  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:06.291543  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:06.291546  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:06.291561  255203 retry.go:31] will retry after 2.373974056s: missing components: kube-dns
	I0317 11:05:08.669149  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:08.669185  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:08.669191  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:08.669198  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:08.669202  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:08.669207  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:08.669210  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:08.669214  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:08.669217  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:08.669231  255203 retry.go:31] will retry after 2.902944154s: missing components: kube-dns
	I0317 11:05:06.413297  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:08.913574  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:07.649064  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:09.649491  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:11.149005  261225 pod_ready.go:82] duration metric: took 4m0.005124542s for pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace to be "Ready" ...
	E0317 11:05:11.149032  261225 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:05:11.149044  261225 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.150773  261225 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-wht7f" not found
	I0317 11:05:11.150799  261225 pod_ready.go:82] duration metric: took 1.746139ms for pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace to be "Ready" ...
	E0317 11:05:11.150812  261225 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-wht7f" not found
	I0317 11:05:11.150820  261225 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.154478  261225 pod_ready.go:93] pod "etcd-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:05:11.154495  261225 pod_ready.go:82] duration metric: took 3.667556ms for pod "etcd-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.154505  261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.158180  261225 pod_ready.go:93] pod "kube-apiserver-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:05:11.158198  261225 pod_ready.go:82] duration metric: took 3.686563ms for pod "kube-apiserver-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.158206  261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.161883  261225 pod_ready.go:93] pod "kube-controller-manager-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:05:11.161902  261225 pod_ready.go:82] duration metric: took 3.688883ms for pod "kube-controller-manager-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.161912  261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-sr64l" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.347703  261225 pod_ready.go:93] pod "kube-proxy-sr64l" in "kube-system" namespace has status "Ready":"True"
	I0317 11:05:11.347728  261225 pod_ready.go:82] duration metric: took 185.808929ms for pod "kube-proxy-sr64l" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.347737  261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.748058  261225 pod_ready.go:93] pod "kube-scheduler-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:05:11.748080  261225 pod_ready.go:82] duration metric: took 400.336874ms for pod "kube-scheduler-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.748088  261225 pod_ready.go:39] duration metric: took 4m0.610767407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
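	The pod_ready.go lines above poll individual kube-system pods for the PodReady condition, treating a "not found" pod as skippable and giving up on a pod once the per-phase context deadline expires. A sketch of the same check with client-go follows, assuming a reachable cluster via ~/.kube/config; the polling loop, the 2-second interval, and the choice of pod are assumptions, only the condition being tested matches the log:
	
		package main
	
		import (
			"context"
			"fmt"
			"os"
			"path/filepath"
			"time"
	
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
	
		// isPodReady reports whether the pod's Ready condition is True --
		// the condition pod_ready.go waits on in the log above.
		func isPodReady(pod *corev1.Pod) bool {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue
				}
			}
			return false
		}
	
		func main() {
			kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
			cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
			if err != nil {
				panic(err)
			}
			client := kubernetes.NewForConfigOrDie(cfg)
	
			ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
			defer cancel()
	
			// Poll one pod (name taken from the log) until Ready or deadline.
			const ns, name = "kube-system", "coredns-668d6bf9bc-vjvg5"
			for {
				pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					fmt.Println("get pod:", err) // "not found" pods are skipped in the log
				} else if isPodReady(pod) {
					fmt.Printf("pod %q is Ready\n", name)
					return
				}
				select {
				case <-ctx.Done():
					fmt.Println("context deadline exceeded") // matches the E-lines above
					return
				case <-time.After(2 * time.Second):
				}
			}
		}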
	I0317 11:05:11.748109  261225 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:05:11.748151  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:05:11.748204  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:05:11.782166  261225 cri.go:89] found id: "8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:11.782194  261225 cri.go:89] found id: ""
	I0317 11:05:11.782202  261225 logs.go:282] 1 containers: [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5]
	I0317 11:05:11.782250  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:11.785774  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:05:11.785828  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:05:11.818679  261225 cri.go:89] found id: "23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:11.818709  261225 cri.go:89] found id: ""
	I0317 11:05:11.818718  261225 logs.go:282] 1 containers: [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9]
	I0317 11:05:11.818773  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:11.822242  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:05:11.822313  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:05:11.855724  261225 cri.go:89] found id: ""
	I0317 11:05:11.855749  261225 logs.go:282] 0 containers: []
	W0317 11:05:11.855757  261225 logs.go:284] No container was found matching "coredns"
	I0317 11:05:11.855762  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:05:11.855840  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:05:11.889868  261225 cri.go:89] found id: "e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:11.889895  261225 cri.go:89] found id: ""
	I0317 11:05:11.889905  261225 logs.go:282] 1 containers: [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997]
	I0317 11:05:11.889968  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:11.893455  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:05:11.893528  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:05:11.930185  261225 cri.go:89] found id: "97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:11.930215  261225 cri.go:89] found id: ""
	I0317 11:05:11.930226  261225 logs.go:282] 1 containers: [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7]
	I0317 11:05:11.930281  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:11.934085  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:05:11.934163  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:05:11.969461  261225 cri.go:89] found id: "26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:11.969486  261225 cri.go:89] found id: ""
	I0317 11:05:11.969495  261225 logs.go:282] 1 containers: [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405]
	I0317 11:05:11.969554  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:11.973137  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:05:11.973221  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:05:12.007038  261225 cri.go:89] found id: ""
	I0317 11:05:12.007061  261225 logs.go:282] 0 containers: []
	W0317 11:05:12.007068  261225 logs.go:284] No container was found matching "kindnet"
	I0317 11:05:12.007082  261225 logs.go:123] Gathering logs for dmesg ...
	I0317 11:05:12.007094  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:05:12.027405  261225 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:05:12.027439  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:05:12.114815  261225 logs.go:123] Gathering logs for kube-scheduler [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997] ...
	I0317 11:05:12.114845  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:12.157696  261225 logs.go:123] Gathering logs for kube-proxy [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7] ...
	I0317 11:05:12.157731  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:12.195338  261225 logs.go:123] Gathering logs for containerd ...
	I0317 11:05:12.195366  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:05:11.576191  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:11.576220  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:11.576226  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:11.576233  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:11.576237  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:11.576241  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:11.576244  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:11.576248  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:11.576250  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:11.576262  255203 retry.go:31] will retry after 5.178275462s: missing components: kube-dns
	I0317 11:05:11.413836  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:13.914144  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:14.909964  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:05:14.909983  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:14.909990  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:05:14.909996  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:14.909999  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:05:14.910002  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:05:14.910004  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:05:14.910009  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:05:14.910021  245681 retry.go:31] will retry after 17.363957253s: missing components: kube-dns
	I0317 11:05:12.239939  261225 logs.go:123] Gathering logs for kubelet ...
	I0317 11:05:12.239978  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:05:12.332451  261225 logs.go:123] Gathering logs for kube-apiserver [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5] ...
	I0317 11:05:12.332491  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:12.375771  261225 logs.go:123] Gathering logs for etcd [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9] ...
	I0317 11:05:12.375804  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:12.416166  261225 logs.go:123] Gathering logs for kube-controller-manager [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405] ...
	I0317 11:05:12.416200  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:12.467570  261225 logs.go:123] Gathering logs for container status ...
	I0317 11:05:12.467603  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
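	The "Gathering logs for ..." sequence above is minikube's diagnostic sweep after a stalled wait: it shells out to journalctl for the kubelet and containerd units, to crictl for the last lines of each control-plane container, and falls back to docker ps for container status. A local-machine equivalent in Go using os/exec is sketched below; the command list is copied from the Run: lines above, while the collectDiagnostics helper itself is hypothetical and assumes journalctl and crictl are on PATH with root privileges:
	
		package main
	
		import (
			"fmt"
			"os/exec"
		)
	
		// collectDiagnostics runs the same commands the log shows minikube
		// executing over SSH, printing each command's combined output.
		func collectDiagnostics() {
			cmds := [][]string{
				{"journalctl", "-u", "kubelet", "-n", "400"},
				{"journalctl", "-u", "containerd", "-n", "400"},
				{"crictl", "ps", "-a"},
			}
			for _, c := range cmds {
				out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
				if err != nil {
					fmt.Printf("%v failed: %v\n", c, err)
					continue
				}
				fmt.Printf("=== %v ===\n%s\n", c, out)
			}
		}
	
		func main() { collectDiagnostics() }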
	I0317 11:05:15.008253  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:05:15.020269  261225 api_server.go:72] duration metric: took 4m4.739086442s to wait for apiserver process to appear ...
	I0317 11:05:15.020303  261225 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:05:15.020339  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:05:15.020402  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:05:15.054066  261225 cri.go:89] found id: "8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:15.054088  261225 cri.go:89] found id: ""
	I0317 11:05:15.054096  261225 logs.go:282] 1 containers: [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5]
	I0317 11:05:15.054147  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:15.057724  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:05:15.057783  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:05:15.090544  261225 cri.go:89] found id: "23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:15.090565  261225 cri.go:89] found id: ""
	I0317 11:05:15.090572  261225 logs.go:282] 1 containers: [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9]
	I0317 11:05:15.090614  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:15.094062  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:05:15.094127  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:05:15.132281  261225 cri.go:89] found id: ""
	I0317 11:05:15.132308  261225 logs.go:282] 0 containers: []
	W0317 11:05:15.132319  261225 logs.go:284] No container was found matching "coredns"
	I0317 11:05:15.132327  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:05:15.132383  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:05:15.166781  261225 cri.go:89] found id: "e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:15.166825  261225 cri.go:89] found id: ""
	I0317 11:05:15.166835  261225 logs.go:282] 1 containers: [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997]
	I0317 11:05:15.166893  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:15.170624  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:05:15.170690  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:05:15.203912  261225 cri.go:89] found id: "97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:15.203939  261225 cri.go:89] found id: ""
	I0317 11:05:15.203950  261225 logs.go:282] 1 containers: [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7]
	I0317 11:05:15.204008  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:15.207632  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:05:15.207715  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:05:15.241079  261225 cri.go:89] found id: "26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:15.241106  261225 cri.go:89] found id: ""
	I0317 11:05:15.241117  261225 logs.go:282] 1 containers: [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405]
	I0317 11:05:15.241174  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:15.244691  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:05:15.244758  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:05:15.280054  261225 cri.go:89] found id: ""
	I0317 11:05:15.280078  261225 logs.go:282] 0 containers: []
	W0317 11:05:15.280086  261225 logs.go:284] No container was found matching "kindnet"
	I0317 11:05:15.280099  261225 logs.go:123] Gathering logs for kube-apiserver [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5] ...
	I0317 11:05:15.280111  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:15.321837  261225 logs.go:123] Gathering logs for kube-scheduler [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997] ...
	I0317 11:05:15.321870  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:15.364421  261225 logs.go:123] Gathering logs for kube-proxy [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7] ...
	I0317 11:05:15.364456  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:15.398977  261225 logs.go:123] Gathering logs for kube-controller-manager [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405] ...
	I0317 11:05:15.399005  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:15.449068  261225 logs.go:123] Gathering logs for containerd ...
	I0317 11:05:15.449101  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:05:15.495271  261225 logs.go:123] Gathering logs for kubelet ...
	I0317 11:05:15.495313  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:05:15.584229  261225 logs.go:123] Gathering logs for dmesg ...
	I0317 11:05:15.584269  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:05:15.603621  261225 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:05:15.603651  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:05:15.689841  261225 logs.go:123] Gathering logs for etcd [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9] ...
	I0317 11:05:15.689875  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:15.731335  261225 logs.go:123] Gathering logs for container status ...
	I0317 11:05:15.731369  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:05:16.758565  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:16.758597  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:16.758603  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:16.758610  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:16.758616  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:16.758622  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:16.758625  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:16.758629  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:16.758633  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:16.758647  255203 retry.go:31] will retry after 4.630324475s: missing components: kube-dns
	I0317 11:05:16.413215  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:18.413764  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:18.269898  261225 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0317 11:05:18.274573  261225 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0317 11:05:18.275657  261225 api_server.go:141] control plane version: v1.32.2
	I0317 11:05:18.275685  261225 api_server.go:131] duration metric: took 3.255374368s to wait for apiserver health ...
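	Once the apiserver process is up, api_server.go above polls https://192.168.76.2:8443/healthz until it returns 200 with body "ok", then reads the control-plane version. A client-go sketch of that health wait follows, assuming credentials in ~/.kube/config; the one-second poll interval and the waitForHealthz name are assumptions, not minikube's implementation:
	
		package main
	
		import (
			"context"
			"fmt"
			"time"
	
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
	
		// waitForHealthz polls the apiserver's /healthz endpoint until it
		// answers "ok", mirroring the health wait recorded in the log.
		func waitForHealthz(client kubernetes.Interface, timeout time.Duration) error {
			ctx, cancel := context.WithTimeout(context.Background(), timeout)
			defer cancel()
			for {
				body, err := client.Discovery().RESTClient().
					Get().AbsPath("/healthz").DoRaw(ctx)
				if err == nil && string(body) == "ok" {
					return nil // the log records `returned 200: ok`
				}
				select {
				case <-ctx.Done():
					return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
				case <-time.After(time.Second):
				}
			}
		}
	
		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			client := kubernetes.NewForConfigOrDie(cfg)
			if err := waitForHealthz(client, 2*time.Minute); err != nil {
				panic(err)
			}
			fmt.Println("apiserver healthy")
		}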
	I0317 11:05:18.275696  261225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:05:18.275723  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:05:18.275782  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:05:18.308555  261225 cri.go:89] found id: "8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:18.308574  261225 cri.go:89] found id: ""
	I0317 11:05:18.308581  261225 logs.go:282] 1 containers: [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5]
	I0317 11:05:18.308628  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:18.311845  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:05:18.311901  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:05:18.344040  261225 cri.go:89] found id: "23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:18.344062  261225 cri.go:89] found id: ""
	I0317 11:05:18.344079  261225 logs.go:282] 1 containers: [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9]
	I0317 11:05:18.344138  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:18.347489  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:05:18.347549  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:05:18.382251  261225 cri.go:89] found id: ""
	I0317 11:05:18.382272  261225 logs.go:282] 0 containers: []
	W0317 11:05:18.382280  261225 logs.go:284] No container was found matching "coredns"
	I0317 11:05:18.382286  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:05:18.382340  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:05:18.416712  261225 cri.go:89] found id: "e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:18.416729  261225 cri.go:89] found id: ""
	I0317 11:05:18.416736  261225 logs.go:282] 1 containers: [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997]
	I0317 11:05:18.416777  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:18.420319  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:05:18.420397  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:05:18.454494  261225 cri.go:89] found id: "97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:18.454520  261225 cri.go:89] found id: ""
	I0317 11:05:18.454539  261225 logs.go:282] 1 containers: [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7]
	I0317 11:05:18.454594  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:18.457995  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:05:18.458063  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:05:18.490148  261225 cri.go:89] found id: "26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:18.490167  261225 cri.go:89] found id: ""
	I0317 11:05:18.490174  261225 logs.go:282] 1 containers: [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405]
	I0317 11:05:18.490225  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:18.493459  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:05:18.493515  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:05:18.525609  261225 cri.go:89] found id: ""
	I0317 11:05:18.525633  261225 logs.go:282] 0 containers: []
	W0317 11:05:18.525644  261225 logs.go:284] No container was found matching "kindnet"
	I0317 11:05:18.525661  261225 logs.go:123] Gathering logs for kubelet ...
	I0317 11:05:18.525676  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:05:18.611130  261225 logs.go:123] Gathering logs for dmesg ...
	I0317 11:05:18.611164  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:05:18.629424  261225 logs.go:123] Gathering logs for kube-apiserver [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5] ...
	I0317 11:05:18.629451  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:18.668784  261225 logs.go:123] Gathering logs for kube-scheduler [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997] ...
	I0317 11:05:18.668814  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:18.707925  261225 logs.go:123] Gathering logs for kube-proxy [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7] ...
	I0317 11:05:18.707953  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:18.745255  261225 logs.go:123] Gathering logs for kube-controller-manager [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405] ...
	I0317 11:05:18.745282  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:18.792139  261225 logs.go:123] Gathering logs for containerd ...
	I0317 11:05:18.792168  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:05:18.837395  261225 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:05:18.837426  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:05:18.927307  261225 logs.go:123] Gathering logs for etcd [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9] ...
	I0317 11:05:18.927334  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:18.970538  261225 logs.go:123] Gathering logs for container status ...
	I0317 11:05:18.970572  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:05:21.510656  261225 system_pods.go:59] 8 kube-system pods found
	I0317 11:05:21.510691  261225 system_pods.go:61] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:21.510697  261225 system_pods.go:61] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:21.510704  261225 system_pods.go:61] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:21.510711  261225 system_pods.go:61] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:21.510715  261225 system_pods.go:61] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:21.510718  261225 system_pods.go:61] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:21.510722  261225 system_pods.go:61] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:21.510725  261225 system_pods.go:61] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:21.510731  261225 system_pods.go:74] duration metric: took 3.235029547s to wait for pod list to return data ...
	I0317 11:05:21.510740  261225 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:05:21.513446  261225 default_sa.go:45] found service account: "default"
	I0317 11:05:21.513476  261225 default_sa.go:55] duration metric: took 2.728168ms for default service account to be created ...
	I0317 11:05:21.513489  261225 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:05:21.516171  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:21.516197  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:21.516205  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:21.516212  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:21.516216  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:21.516220  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:21.516223  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:21.516226  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:21.516228  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:21.516246  261225 retry.go:31] will retry after 304.55093ms: missing components: kube-dns
	I0317 11:05:21.824952  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:21.824993  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:21.825002  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:21.825013  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:21.825018  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:21.825022  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:21.825026  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:21.825031  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:21.825036  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:21.825057  261225 retry.go:31] will retry after 301.434218ms: missing components: kube-dns
	I0317 11:05:22.131409  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:22.131455  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:22.131469  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:22.131481  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:22.131487  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:22.131495  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:22.131506  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:22.131511  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:22.131516  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:22.131533  261225 retry.go:31] will retry after 479.197877ms: missing components: kube-dns
	I0317 11:05:21.393756  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:21.393798  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:21.393807  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:21.393821  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:21.393831  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:21.393902  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:21.393930  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:21.393940  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:21.393945  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:21.393967  255203 retry.go:31] will retry after 5.810224129s: missing components: kube-dns
	I0317 11:05:20.913030  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:23.413886  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:22.613878  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:22.613913  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:22.613921  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:22.613929  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:22.613935  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:22.613941  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:22.613946  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:22.613953  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:22.613958  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:22.613976  261225 retry.go:31] will retry after 442.216978ms: missing components: kube-dns
	I0317 11:05:23.059458  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:23.059488  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:23.059494  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:23.059501  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:23.059506  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:23.059512  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:23.059517  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:23.059522  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:23.059530  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:23.059547  261225 retry.go:31] will retry after 657.88959ms: missing components: kube-dns
	I0317 11:05:23.721630  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:23.721665  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:23.721673  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:23.721681  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:23.721687  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:23.721693  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:23.721698  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:23.721703  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:23.721712  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:23.721731  261225 retry.go:31] will retry after 610.04653ms: missing components: kube-dns
	I0317 11:05:24.335549  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:24.335592  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:24.335603  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:24.335612  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:24.335616  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:24.335623  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:24.335630  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:24.335640  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:24.335647  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:24.335663  261225 retry.go:31] will retry after 985.298595ms: missing components: kube-dns
	I0317 11:05:25.325186  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:25.325217  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:25.325223  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:25.325230  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:25.325234  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:25.325238  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:25.325241  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:25.325244  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:25.325247  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:25.325259  261225 retry.go:31] will retry after 980.725261ms: missing components: kube-dns
	I0317 11:05:26.309421  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:26.309457  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:26.309465  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:26.309475  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:26.309483  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:26.309494  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:26.309505  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:26.309512  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:26.309518  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:26.309537  261225 retry.go:31] will retry after 1.123138561s: missing components: kube-dns
	I0317 11:05:27.208820  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:27.208855  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:27.208862  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:27.208871  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:27.208877  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:27.208883  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:27.208887  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:27.208892  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:27.208898  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:27.208916  255203 retry.go:31] will retry after 8.348805555s: missing components: kube-dns
	I0317 11:05:25.912638  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:27.913452  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:27.436613  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:27.436643  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:27.436649  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:27.436657  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:27.436662  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:27.436668  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:27.436674  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:27.436679  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:27.436684  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:27.436702  261225 retry.go:31] will retry after 1.57268651s: missing components: kube-dns
	I0317 11:05:29.012826  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:29.012864  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:29.012872  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:29.012882  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:29.012888  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:29.012894  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:29.012898  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:29.012903  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:29.012908  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:29.012925  261225 retry.go:31] will retry after 2.671867502s: missing components: kube-dns
	I0317 11:05:31.689143  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:31.689181  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:31.689189  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:31.689199  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:31.689205  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:31.689211  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:31.689216  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:31.689222  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:31.689227  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:31.689246  261225 retry.go:31] will retry after 3.255293189s: missing components: kube-dns
	I0317 11:05:30.412494  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:32.412901  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:32.277700  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:05:32.277724  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:32.277731  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:05:32.277740  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:32.277744  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:05:32.277748  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:05:32.277750  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:05:32.277752  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:05:32.277766  245681 retry.go:31] will retry after 23.285243045s: missing components: kube-dns
	I0317 11:05:34.948821  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:34.948853  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:34.948859  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:34.948866  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:34.948871  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:34.948875  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:34.948878  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:34.948882  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:34.948886  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:34.948899  261225 retry.go:31] will retry after 3.968980109s: missing components: kube-dns
	I0317 11:05:35.562294  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:35.562334  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:35.562342  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:35.562351  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:35.562355  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:35.562359  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:35.562362  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:35.562365  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:35.562369  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:35.562382  255203 retry.go:31] will retry after 10.54807244s: missing components: kube-dns
	I0317 11:05:34.912822  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:37.412649  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:38.922353  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:38.922385  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:38.922391  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:38.922399  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:38.922403  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:38.922407  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:38.922411  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:38.922414  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:38.922418  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:38.922432  261225 retry.go:31] will retry after 4.763605942s: missing components: kube-dns
	I0317 11:05:39.912457  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:41.912831  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:44.412502  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:43.690391  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:43.690433  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:43.690442  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:43.690454  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:43.690461  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:43.690470  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:43.690479  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:43.690487  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:43.690491  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:43.690509  261225 retry.go:31] will retry after 5.467335218s: missing components: kube-dns
	I0317 11:05:46.114496  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:46.114535  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:46.114541  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:05:46.114548  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:46.114552  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:05:46.114556  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:05:46.114559  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:05:46.114563  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:05:46.114565  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:05:46.114581  255203 retry.go:31] will retry after 15.508572932s: missing components: kube-dns
	I0317 11:05:46.913439  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:48.913558  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:49.162254  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:49.162287  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:49.162293  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:49.162300  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:49.162303  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:49.162309  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:49.162312  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:49.162317  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:49.162321  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:49.162334  261225 retry.go:31] will retry after 5.883169741s: missing components: kube-dns
	I0317 11:05:51.412685  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:53.413783  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:55.566604  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:05:55.566623  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:55.566627  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:05:55.566634  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:55.566640  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:05:55.566643  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:05:55.566646  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:05:55.566648  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:05:55.566661  245681 retry.go:31] will retry after 29.32259174s: missing components: kube-dns
	I0317 11:05:55.050444  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:55.050483  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:55.050491  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:55.050501  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:55.050507  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:55.050513  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:55.050516  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:55.050520  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:55.050526  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:55.050545  261225 retry.go:31] will retry after 9.352777192s: missing components: kube-dns
	I0317 11:05:55.913043  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:58.412339  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:01.626663  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:06:01.626695  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:06:01.626700  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:06:01.626708  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:06:01.626712  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:06:01.626716  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:06:01.626720  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:06:01.626723  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:06:01.626726  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:06:01.626740  255203 retry.go:31] will retry after 20.504309931s: missing components: kube-dns
	I0317 11:06:00.412947  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:02.912700  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:04.407436  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:06:04.407468  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:06:04.407473  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:06:04.407481  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:06:04.407485  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:06:04.407490  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:06:04.407493  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:06:04.407497  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:06:04.407500  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:06:04.407513  261225 retry.go:31] will retry after 9.592726834s: missing components: kube-dns
	I0317 11:06:04.913636  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:07.413198  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:09.413596  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:11.912609  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:13.913341  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:14.003835  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:06:14.003876  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:06:14.003884  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:06:14.003894  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:06:14.003897  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:06:14.003902  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:06:14.003905  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:06:14.003908  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:06:14.003911  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:06:14.003926  261225 retry.go:31] will retry after 15.514429293s: missing components: kube-dns
	I0317 11:06:16.412593  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:18.913785  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:22.134045  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:06:22.134078  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:06:22.134083  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:06:22.134091  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:06:22.134095  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:06:22.134099  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:06:22.134105  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:06:22.134108  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:06:22.134111  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:06:22.134125  255203 retry.go:31] will retry after 23.428586225s: missing components: kube-dns
	I0317 11:06:21.412772  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:23.412952  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:24.894075  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:06:24.894098  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:06:24.894103  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:06:24.894109  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:06:24.894111  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:06:24.894114  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:06:24.894117  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:06:24.894119  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:06:24.894131  245681 retry.go:31] will retry after 43.021190015s: missing components: kube-dns
	I0317 11:06:25.912643  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:27.913773  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:29.522530  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:06:29.522566  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:06:29.522573  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:06:29.522582  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:06:29.522588  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:06:29.522594  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:06:29.522604  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:06:29.522609  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:06:29.522615  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:06:29.522635  261225 retry.go:31] will retry after 19.290967428s: missing components: kube-dns
	I0317 11:06:30.412571  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:32.412732  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:34.913454  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:37.412555  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:39.412879  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:41.413880  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:43.913261  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:45.566284  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:06:45.566316  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:06:45.566325  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:06:45.566333  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:06:45.566337  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:06:45.566341  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:06:45.566344  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:06:45.566349  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:06:45.566352  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:06:45.566365  255203 retry.go:31] will retry after 32.86473348s: missing components: kube-dns
	I0317 11:06:45.913636  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:48.412720  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:48.816926  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:06:48.816957  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:06:48.816963  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:06:48.816971  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:06:48.816978  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:06:48.816981  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:06:48.816985  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:06:48.816988  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:06:48.816991  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:06:48.817004  261225 retry.go:31] will retry after 26.212373787s: missing components: kube-dns
	I0317 11:06:49.912504  271403 pod_ready.go:82] duration metric: took 4m0.004506039s for pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace to be "Ready" ...
	E0317 11:06:49.912527  271403 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:06:49.912535  271403 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-ks7vr" in "kube-system" namespace to be "Ready" ...
	I0317 11:06:51.918374  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:53.918973  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:56.418241  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:06:58.418488  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:00.918359  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:03.417624  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:05.418024  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:07.918973  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:07.920403  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:07:07.920426  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:07:07.920434  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:07:07.920443  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:07:07.920448  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:07:07.920452  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:07:07.920456  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:07:07.920459  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:07:07.920475  245681 retry.go:31] will retry after 53.923427957s: missing components: kube-dns
	I0317 11:07:10.418272  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:12.918795  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:15.034531  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:07:15.034568  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:07:15.034577  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:07:15.034588  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:07:15.034594  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:07:15.034600  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:07:15.034608  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:07:15.034619  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:07:15.034624  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:07:15.034641  261225 retry.go:31] will retry after 28.925757751s: missing components: kube-dns
	I0317 11:07:18.436444  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:07:18.436481  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:07:18.436488  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:07:18.436497  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:07:18.436504  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:07:18.436508  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:07:18.436512  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:07:18.436515  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:07:18.436518  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:07:18.436534  255203 retry.go:31] will retry after 27.149619295s: missing components: kube-dns
	I0317 11:07:15.418597  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:17.917419  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:19.918049  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:21.918844  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:24.417950  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:26.918233  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:29.417309  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:31.418245  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:33.418716  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:35.918570  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:38.417408  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:40.417958  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:42.918827  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:43.964848  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:07:43.964882  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:07:43.964889  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:07:43.964898  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:07:43.964903  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:07:43.964907  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:07:43.964911  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:07:43.964914  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:07:43.964917  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:07:43.964929  261225 retry.go:31] will retry after 31.458446993s: missing components: kube-dns
	I0317 11:07:45.589503  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:07:45.589536  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:07:45.589543  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:07:45.589554  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:07:45.589560  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:07:45.589567  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:07:45.589573  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:07:45.589580  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:07:45.589586  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:07:45.589608  255203 retry.go:31] will retry after 36.355329469s: missing components: kube-dns
	I0317 11:07:45.418196  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:47.919273  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:50.417016  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:52.418046  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:54.418176  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:56.918881  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:07:59.417841  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:01.418036  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:03.917623  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:01.847931  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:08:01.847956  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:08:01.847962  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:08:01.847970  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:08:01.847974  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:08:01.847977  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:08:01.847980  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:08:01.847982  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:08:01.847996  245681 retry.go:31] will retry after 1m1.058602694s: missing components: kube-dns
	I0317 11:08:05.918610  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:08.417523  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:10.417892  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:12.418404  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:15.427421  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:08:15.427462  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:08:15.427472  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:08:15.427483  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:08:15.427489  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:08:15.427495  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:08:15.427500  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:08:15.427505  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:08:15.427509  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:08:15.427522  261225 retry.go:31] will retry after 32.96114545s: missing components: kube-dns
	I0317 11:08:14.918113  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:17.417020  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:19.417791  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:21.949272  255203 system_pods.go:86] 8 kube-system pods found
	I0317 11:08:21.949312  255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:08:21.949321  255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
	I0317 11:08:21.949330  255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:08:21.949335  255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
	I0317 11:08:21.949341  255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
	I0317 11:08:21.949347  255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
	I0317 11:08:21.949356  255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
	I0317 11:08:21.949361  255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
	I0317 11:08:21.949381  255203 retry.go:31] will retry after 52.503914166s: missing components: kube-dns
	I0317 11:08:21.917617  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:23.917816  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:25.917904  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:28.417956  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:30.917837  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:32.918089  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:34.918753  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:37.417237  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:39.919014  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:42.417529  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:44.418297  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:46.918258  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:48.918688  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:48.392505  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:08:48.392536  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:08:48.392542  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:08:48.392549  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:08:48.392555  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:08:48.392561  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:08:48.392566  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:08:48.392571  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:08:48.392579  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:08:48.392597  261225 retry.go:31] will retry after 40.97829734s: missing components: kube-dns
	I0317 11:08:51.417307  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:53.418355  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:55.918117  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:08:57.918410  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
	I0317 11:09:02.910188  245681 system_pods.go:86] 7 kube-system pods found
	I0317 11:09:02.910217  245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:09:02.910222  245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
	I0317 11:09:02.910229  245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:09:02.910231  245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
	I0317 11:09:02.910234  245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
	I0317 11:09:02.910236  245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
	I0317 11:09:02.910238  245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
	I0317 11:09:02.912217  245681 out.go:201] 
	W0317 11:09:02.913584  245681 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:09:02.913604  245681 out.go:270] * 
	W0317 11:09:02.914653  245681 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:09:02.916068  245681 out.go:201] 
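	
	Taken together with the sections below, the failure chain for this run is visible in the logs themselves: kube-dns (CoreDNS) never leaves Pending because the kindnet CNI pod is stuck in ImagePullBackOff (Docker Hub returns 429 Too Many Requests for kindest/kindnetd, see the kubelet log at the end), so pod networking is never configured and every CoreDNS sandbox fails at CNI setup with "failed to find network info". A minimal way to confirm this against the failed profile, assuming the cluster is still up (standard minikube/kubectl/ssh invocations, illustrative only, not part of the test itself):
	
	  # Only coredns and kindnet should be Pending; everything else is Running
	  out/minikube-linux-amd64 -p pause-507725 kubectl -- -n kube-system get pods -o wide
	  # kindnetd normally writes the CNI config here on startup; an empty or
	  # missing directory is consistent with the "failed to find network info" errors
	  out/minikube-linux-amd64 -p pause-507725 ssh -- sudo ls -la /etc/cni/net.d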
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	491cadd11c003       f1332858868e1       9 minutes ago       Running             kube-proxy                0                   5195a921f9a59       kube-proxy-lmh8d
	d003c8e8dced3       d8e673e7c9983       9 minutes ago       Running             kube-scheduler            0                   17ddba14c205c       kube-scheduler-pause-507725
	d870fd4dffe56       85b7a174738ba       9 minutes ago       Running             kube-apiserver            0                   b34936203cd4e       kube-apiserver-pause-507725
	80ceacde36f32       b6a454c5a800d       9 minutes ago       Running             kube-controller-manager   0                   fc94d7ad8d77b       kube-controller-manager-pause-507725
	5f8e66af286f7       a9e7e6b294baf       9 minutes ago       Running             etcd                      0                   2af839fc0332d       etcd-pause-507725
	
	
	==> containerd <==
	Mar 17 11:06:22 pause-507725 containerd[875]: time="2025-03-17T11:06:22.125265986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba91e813d618fed827b357e412ac5ea1bac03faf8d25fd213ec6681ba02a3a43\": failed to find network info for sandbox \"ba91e813d618fed827b357e412ac5ea1bac03faf8d25fd213ec6681ba02a3a43\""
	Mar 17 11:06:33 pause-507725 containerd[875]: time="2025-03-17T11:06:33.107114133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:06:33 pause-507725 containerd[875]: time="2025-03-17T11:06:33.125571625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03669b507fabefd7a7439c72050b16b52dd1c24e68f434a6895a5006b3a0b19d\": failed to find network info for sandbox \"03669b507fabefd7a7439c72050b16b52dd1c24e68f434a6895a5006b3a0b19d\""
	Mar 17 11:06:49 pause-507725 containerd[875]: time="2025-03-17T11:06:49.106897888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:06:49 pause-507725 containerd[875]: time="2025-03-17T11:06:49.125026596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"07f7e01a84e49f010c2d20f74fe4d30f6273f24782bb9d342ae01aeb474367eb\": failed to find network info for sandbox \"07f7e01a84e49f010c2d20f74fe4d30f6273f24782bb9d342ae01aeb474367eb\""
	Mar 17 11:07:01 pause-507725 containerd[875]: time="2025-03-17T11:07:01.107832893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:07:01 pause-507725 containerd[875]: time="2025-03-17T11:07:01.127232587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e74da7172b1e465c7c90b97db8563f5ca208449d7cdb777442a8f65a427ca150\": failed to find network info for sandbox \"e74da7172b1e465c7c90b97db8563f5ca208449d7cdb777442a8f65a427ca150\""
	Mar 17 11:07:16 pause-507725 containerd[875]: time="2025-03-17T11:07:16.108053001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:07:16 pause-507725 containerd[875]: time="2025-03-17T11:07:16.127421346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87662ee9ccf85daadaa5ed7ab6488905433f3c2f57b53e9682dd13ccff3208d3\": failed to find network info for sandbox \"87662ee9ccf85daadaa5ed7ab6488905433f3c2f57b53e9682dd13ccff3208d3\""
	Mar 17 11:07:28 pause-507725 containerd[875]: time="2025-03-17T11:07:28.107127360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:07:28 pause-507725 containerd[875]: time="2025-03-17T11:07:28.126458750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e57c4aea1367ddef178fbffea911b2ab1f61a35f077325a52c43bff5c4e3744\": failed to find network info for sandbox \"1e57c4aea1367ddef178fbffea911b2ab1f61a35f077325a52c43bff5c4e3744\""
	Mar 17 11:07:39 pause-507725 containerd[875]: time="2025-03-17T11:07:39.107512780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:07:39 pause-507725 containerd[875]: time="2025-03-17T11:07:39.127005681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a9b7dd1f2a06d86dd7856395b32e4178006ac8b9c52903e1080c869be1a0d77e\": failed to find network info for sandbox \"a9b7dd1f2a06d86dd7856395b32e4178006ac8b9c52903e1080c869be1a0d77e\""
	Mar 17 11:07:53 pause-507725 containerd[875]: time="2025-03-17T11:07:53.107399913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:07:53 pause-507725 containerd[875]: time="2025-03-17T11:07:53.125483050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e95094bb67a9a22a07d4e8b3eebfd630e0c45391cbffc4d04900540d845e5d10\": failed to find network info for sandbox \"e95094bb67a9a22a07d4e8b3eebfd630e0c45391cbffc4d04900540d845e5d10\""
	Mar 17 11:08:06 pause-507725 containerd[875]: time="2025-03-17T11:08:06.107314732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:08:06 pause-507725 containerd[875]: time="2025-03-17T11:08:06.126209721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\": failed to find network info for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\""
	Mar 17 11:08:20 pause-507725 containerd[875]: time="2025-03-17T11:08:20.109255755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:08:20 pause-507725 containerd[875]: time="2025-03-17T11:08:20.129381467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\": failed to find network info for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\""
	Mar 17 11:08:33 pause-507725 containerd[875]: time="2025-03-17T11:08:33.107167526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:08:33 pause-507725 containerd[875]: time="2025-03-17T11:08:33.126559875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\": failed to find network info for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\""
	Mar 17 11:08:46 pause-507725 containerd[875]: time="2025-03-17T11:08:46.107017109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:08:46 pause-507725 containerd[875]: time="2025-03-17T11:08:46.124762851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\": failed to find network info for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\""
	Mar 17 11:08:57 pause-507725 containerd[875]: time="2025-03-17T11:08:57.107877216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:08:57 pause-507725 containerd[875]: time="2025-03-17T11:08:57.127117051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\": failed to find network info for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\""
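	
	The same RunPodSandbox call for coredns-668d6bf9bc-c7scj repeats roughly every 11-16 seconds (the kubelet re-syncing the stuck pod) and fails at CNI setup each time, so a Ready sandbox never exists. If a live session were available, this could be watched directly on the node (standard crictl/journalctl usage; the commands below are illustrative, not taken from the test run):
	
	  # No Ready pod sandbox should be listed for coredns
	  out/minikube-linux-amd64 -p pause-507725 ssh -- sudo crictl pods --name coredns
	  # Follow containerd's log and wait for the next failed attempt
	  out/minikube-linux-amd64 -p pause-507725 ssh -- sudo journalctl -u containerd -f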
	
	
	==> describe nodes <==
	Name:               pause-507725
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-507725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=pause-507725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T10_59_30_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 10:59:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-507725
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:09:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:07:08 +0000   Mon, 17 Mar 2025 10:59:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:07:08 +0000   Mon, 17 Mar 2025 10:59:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:07:08 +0000   Mon, 17 Mar 2025 10:59:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:07:08 +0000   Mon, 17 Mar 2025 10:59:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-507725
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eb763a95d9b4e9fb768130dae7e03ee
	  System UUID:                8fb7f3f3-791a-47b3-80f7-6ddbcbe87a67
	  Boot ID:                    6cdff8eb-9dff-46dc-b46a-15af38578335
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.25
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-c7scj                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m28s
	  kube-system                 etcd-pause-507725                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m33s
	  kube-system                 kindnet-dz8rm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m28s
	  kube-system                 kube-apiserver-pause-507725             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m33s
	  kube-system                 kube-controller-manager-pause-507725    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m33s
	  kube-system                 kube-proxy-lmh8d                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 kube-scheduler-pause-507725             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m28s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m39s (x8 over 9m39s)  kubelet          Node pause-507725 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m39s (x8 over 9m39s)  kubelet          Node pause-507725 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m39s (x7 over 9m39s)  kubelet          Node pause-507725 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 9m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  9m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m33s                  kubelet          Node pause-507725 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m33s                  kubelet          Node pause-507725 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m33s                  kubelet          Node pause-507725 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m29s                  node-controller  Node pause-507725 event: Registered Node pause-507725 in Controller
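	
	Note that the node itself reports Ready and every static control-plane pod is Running; the only pods missing are the two that depend on the kindnetd image (kindnet-dz8rm directly, coredns-668d6bf9bc-c7scj via the pod network it would provide). The pod-level events would show the same sandbox failures as the kubelet log; a hypothetical check (standard kubectl, pod name taken from this run):
	
	  out/minikube-linux-amd64 -p pause-507725 kubectl -- -n kube-system describe pod coredns-668d6bf9bc-c7scj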
	
	
	==> dmesg <==
	[  +1.010472] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
	[  +0.000006] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
	[  +0.000001] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
	[  +0.000002] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
	[  +2.011808] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
	[  +0.000007] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
	[  +0.000001] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
	[  +0.003979] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
	[  +0.000006] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
	[  +4.123642] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
	[  +0.000007] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
	[  +0.000001] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
	[  +8.191265] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
	[  +0.000006] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
	[  +0.000002] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
	[Mar17 10:54] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3d29cf6460ef
	[  +0.000005] ll header: 00000000: 1e ab 6c 22 c8 11 ee 9e 42 a2 db 99 08 00
	[  +1.001464] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3d29cf6460ef
	[  +0.000007] ll header: 00000000: 1e ab 6c 22 c8 11 ee 9e 42 a2 db 99 08 00
	[Mar17 10:57] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2] <==
	{"level":"info","ts":"2025-03-17T10:59:25.734902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-03-17T10:59:25.734943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-03-17T10:59:25.734965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-03-17T10:59:25.734978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-03-17T10:59:25.734992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-03-17T10:59:25.735007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-03-17T10:59:25.736088Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T10:59:25.736252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T10:59:25.736253Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:pause-507725 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T10:59:25.736278Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T10:59:25.736487Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T10:59:25.736507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-17T10:59:25.736741Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T10:59:25.736854Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T10:59:25.736980Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T10:59:25.737134Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T10:59:25.737247Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T10:59:25.737836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-03-17T10:59:25.737849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T11:00:05.250083Z","caller":"traceutil/trace.go:171","msg":"trace[382279266] linearizableReadLoop","detail":"{readStateIndex:449; appliedIndex:448; }","duration":"132.68516ms","start":"2025-03-17T11:00:05.117363Z","end":"2025-03-17T11:00:05.250048Z","steps":["trace[382279266] 'read index received'  (duration: 71.38496ms)","trace[382279266] 'applied index is now lower than readState.Index'  (duration: 61.299182ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T11:00:05.250247Z","caller":"traceutil/trace.go:171","msg":"trace[257573388] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"135.125308ms","start":"2025-03-17T11:00:05.115098Z","end":"2025-03-17T11:00:05.250224Z","steps":["trace[257573388] 'process raft request'  (duration: 73.736927ms)","trace[257573388] 'compare'  (duration: 61.043363ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T11:00:05.250319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.886951ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kindnet-dz8rm.182d92048ece9662\" limit:1 ","response":"range_response_count:1 size:714"}
	{"level":"info","ts":"2025-03-17T11:00:05.250379Z","caller":"traceutil/trace.go:171","msg":"trace[202133278] range","detail":"{range_begin:/registry/events/kube-system/kindnet-dz8rm.182d92048ece9662; range_end:; response_count:1; response_revision:429; }","duration":"133.04748ms","start":"2025-03-17T11:00:05.117321Z","end":"2025-03-17T11:00:05.250368Z","steps":["trace[202133278] 'agreement among raft nodes before linearized reading'  (duration: 132.864758ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T11:00:52.279222Z","caller":"traceutil/trace.go:171","msg":"trace[683652511] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"103.989191ms","start":"2025-03-17T11:00:52.175210Z","end":"2025-03-17T11:00:52.279199Z","steps":["trace[683652511] 'process raft request'  (duration: 103.867147ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T11:02:29.364796Z","caller":"traceutil/trace.go:171","msg":"trace[1526124624] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"125.268917ms","start":"2025-03-17T11:02:29.239501Z","end":"2025-03-17T11:02:29.364770Z","steps":["trace[1526124624] 'process raft request'  (duration: 62.477978ms)","trace[1526124624] 'compare'  (duration: 62.665171ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:09:04 up 50 min,  0 users,  load average: 0.49, 1.03, 1.35
	Linux pause-507725 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01] <==
	I0317 10:59:27.804159       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0317 10:59:27.803742       1 aggregator.go:171] initial CRD sync complete...
	I0317 10:59:27.804698       1 autoregister_controller.go:144] Starting autoregister controller
	I0317 10:59:27.804803       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0317 10:59:27.804917       1 cache.go:39] Caches are synced for autoregister controller
	I0317 10:59:27.806417       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0317 10:59:27.806454       1 policy_source.go:240] refreshing policies
	E0317 10:59:27.807951       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0317 10:59:27.808291       1 controller.go:615] quota admission added evaluator for: namespaces
	I0317 10:59:28.012886       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 10:59:28.657870       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0317 10:59:28.662425       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0317 10:59:28.662444       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0317 10:59:29.084293       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 10:59:29.117199       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 10:59:29.214127       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0317 10:59:29.220784       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0317 10:59:29.221856       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 10:59:29.225687       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 10:59:29.723620       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 10:59:30.104918       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 10:59:30.117999       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0317 10:59:30.125918       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 10:59:35.225517       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0317 10:59:35.310809       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718] <==
	I0317 10:59:34.274339       1 shared_informer.go:320] Caches are synced for stateful set
	I0317 10:59:34.274341       1 shared_informer.go:320] Caches are synced for deployment
	I0317 10:59:34.277142       1 shared_informer.go:320] Caches are synced for node
	I0317 10:59:34.277193       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0317 10:59:34.277224       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0317 10:59:34.277232       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0317 10:59:34.277238       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0317 10:59:34.278714       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 10:59:34.286825       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-507725" podCIDRs=["10.244.0.0/24"]
	I0317 10:59:34.286868       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-507725"
	I0317 10:59:34.286894       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-507725"
	I0317 10:59:34.291159       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 10:59:35.215947       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-507725"
	I0317 10:59:35.427587       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="198.785584ms"
	I0317 10:59:35.434273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="6.638713ms"
	I0317 10:59:35.434365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.51µs"
	I0317 10:59:35.441218       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="138.723µs"
	I0317 10:59:35.549314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="11.571425ms"
	I0317 10:59:35.553853       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="4.496661ms"
	I0317 10:59:35.553967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="74.549µs"
	I0317 10:59:37.144576       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="59.695µs"
	I0317 10:59:37.152066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="62.37µs"
	I0317 10:59:37.154638       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.421µs"
	I0317 10:59:40.342920       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-507725"
	I0317 11:07:08.740472       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-507725"
	
	
	==> kube-proxy [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c] <==
	I0317 10:59:35.790303       1 server_linux.go:66] "Using iptables proxy"
	I0317 10:59:35.902940       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0317 10:59:35.903001       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 10:59:35.925931       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0317 10:59:35.925994       1 server_linux.go:170] "Using iptables Proxier"
	I0317 10:59:35.927885       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 10:59:35.928374       1 server.go:497] "Version info" version="v1.32.2"
	I0317 10:59:35.928408       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 10:59:35.929769       1 config.go:199] "Starting service config controller"
	I0317 10:59:35.929793       1 config.go:105] "Starting endpoint slice config controller"
	I0317 10:59:35.929838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 10:59:35.929910       1 config.go:329] "Starting node config controller"
	I0317 10:59:35.929923       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 10:59:35.929985       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 10:59:36.030460       1 shared_informer.go:320] Caches are synced for node config
	I0317 10:59:36.030476       1 shared_informer.go:320] Caches are synced for service config
	I0317 10:59:36.030517       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373] <==
	W0317 10:59:28.625567       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0317 10:59:28.625615       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0317 10:59:28.644350       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0317 10:59:28.644390       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 10:59:28.660824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 10:59:28.660870       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 10:59:28.668130       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 10:59:28.668172       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 10:59:28.727543       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 10:59:28.727593       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0317 10:59:28.749160       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 10:59:28.749207       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 10:59:28.803771       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 10:59:28.803822       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 10:59:28.808292       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 10:59:28.808330       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 10:59:28.848911       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 10:59:28.848981       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 10:59:28.856388       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0317 10:59:28.856431       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 10:59:28.909858       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0317 10:59:28.909917       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 10:59:28.927455       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 10:59:28.927499       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0317 10:59:30.833445       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 11:08:06 pause-507725 kubelet[1653]: E0317 11:08:06.126486    1653 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\": failed to find network info for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\""
	Mar 17 11:08:06 pause-507725 kubelet[1653]: E0317 11:08:06.126561    1653 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\": failed to find network info for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
	Mar 17 11:08:06 pause-507725 kubelet[1653]: E0317 11:08:06.126583    1653 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\": failed to find network info for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
	Mar 17 11:08:06 pause-507725 kubelet[1653]: E0317 11:08:06.126632    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\\\": failed to find network info for sandbox \\\"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\\\"\"" pod="kube-system/coredns-668d6bf9bc-c7scj" podUID="1f683caa-60d7-44f8-b772-ab187e908994"
	Mar 17 11:08:08 pause-507725 kubelet[1653]: E0317 11:08:08.107480    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-dz8rm" podUID="c7a272d4-8d2d-45e7-af98-bfb37db11888"
	Mar 17 11:08:19 pause-507725 kubelet[1653]: E0317 11:08:19.107663    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-dz8rm" podUID="c7a272d4-8d2d-45e7-af98-bfb37db11888"
	Mar 17 11:08:20 pause-507725 kubelet[1653]: E0317 11:08:20.129699    1653 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\": failed to find network info for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\""
	Mar 17 11:08:20 pause-507725 kubelet[1653]: E0317 11:08:20.129772    1653 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\": failed to find network info for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
	Mar 17 11:08:20 pause-507725 kubelet[1653]: E0317 11:08:20.129794    1653 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\": failed to find network info for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
	Mar 17 11:08:20 pause-507725 kubelet[1653]: E0317 11:08:20.129837    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\\\": failed to find network info for sandbox \\\"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\\\"\"" pod="kube-system/coredns-668d6bf9bc-c7scj" podUID="1f683caa-60d7-44f8-b772-ab187e908994"
	Mar 17 11:08:31 pause-507725 kubelet[1653]: E0317 11:08:31.107718    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-dz8rm" podUID="c7a272d4-8d2d-45e7-af98-bfb37db11888"
	Mar 17 11:08:33 pause-507725 kubelet[1653]: E0317 11:08:33.126860    1653 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\": failed to find network info for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\""
	Mar 17 11:08:33 pause-507725 kubelet[1653]: E0317 11:08:33.126948    1653 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\": failed to find network info for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
	Mar 17 11:08:33 pause-507725 kubelet[1653]: E0317 11:08:33.126976    1653 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\": failed to find network info for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
	Mar 17 11:08:33 pause-507725 kubelet[1653]: E0317 11:08:33.127046    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\\\": failed to find network info for sandbox \\\"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\\\"\"" pod="kube-system/coredns-668d6bf9bc-c7scj" podUID="1f683caa-60d7-44f8-b772-ab187e908994"
	Mar 17 11:08:42 pause-507725 kubelet[1653]: E0317 11:08:42.108005    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-dz8rm" podUID="c7a272d4-8d2d-45e7-af98-bfb37db11888"
	Mar 17 11:08:46 pause-507725 kubelet[1653]: E0317 11:08:46.124998    1653 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\": failed to find network info for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\""
	Mar 17 11:08:46 pause-507725 kubelet[1653]: E0317 11:08:46.125086    1653 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\": failed to find network info for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
	Mar 17 11:08:46 pause-507725 kubelet[1653]: E0317 11:08:46.125120    1653 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\": failed to find network info for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
	Mar 17 11:08:46 pause-507725 kubelet[1653]: E0317 11:08:46.125172    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\\\": failed to find network info for sandbox \\\"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\\\"\"" pod="kube-system/coredns-668d6bf9bc-c7scj" podUID="1f683caa-60d7-44f8-b772-ab187e908994"
	Mar 17 11:08:55 pause-507725 kubelet[1653]: E0317 11:08:55.108214    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-dz8rm" podUID="c7a272d4-8d2d-45e7-af98-bfb37db11888"
	Mar 17 11:08:57 pause-507725 kubelet[1653]: E0317 11:08:57.127399    1653 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\": failed to find network info for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\""
	Mar 17 11:08:57 pause-507725 kubelet[1653]: E0317 11:08:57.127487    1653 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\": failed to find network info for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
	Mar 17 11:08:57 pause-507725 kubelet[1653]: E0317 11:08:57.127521    1653 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\": failed to find network info for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
	Mar 17 11:08:57 pause-507725 kubelet[1653]: E0317 11:08:57.127586    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\\\": failed to find network info for sandbox \\\"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\\\"\"" pod="kube-system/coredns-668d6bf9bc-c7scj" podUID="1f683caa-60d7-44f8-b772-ab187e908994"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-507725 -n pause-507725
helpers_test.go:261: (dbg) Run:  kubectl --context pause-507725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-c7scj kindnet-dz8rm
helpers_test.go:274: ======> post-mortem[TestPause/serial/Start]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context pause-507725 describe pod coredns-668d6bf9bc-c7scj kindnet-dz8rm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context pause-507725 describe pod coredns-668d6bf9bc-c7scj kindnet-dz8rm: exit status 1 (59.147675ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-c7scj" not found
	Error from server (NotFound): pods "kindnet-dz8rm" not found

** /stderr **
helpers_test.go:279: kubectl --context pause-507725 describe pod coredns-668d6bf9bc-c7scj kindnet-dz8rm: exit status 1
--- FAIL: TestPause/serial/Start (593.07s)
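Note: the kubelet log above shows the failure chain for this test: unauthenticated pulls of docker.io/kindest/kindnetd are rejected by Docker Hub with 429 Too Many Requests, so the kindnet CNI container never starts, every CoreDNS sandbox setup then fails with "failed to find network info", and minikube times out waiting for kube-dns. One way to confirm the CI host is rate-limited is Docker's documented header probe against the ratelimitpreview/test image; the snippet below is a minimal sketch, not part of the test run, and assumes curl and jq are available on the runner:

    # Fetch an anonymous pull token, then inspect the rate-limit headers
    # returned by a HEAD request on a manifest (HEAD does not count as a pull).
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
    # When throttled, ratelimit-remaining typically reads 0;w=21600 (a six-hour window).

Authenticating the pulls (docker login) or mirroring kindest/kindnetd into an unthrottled registry are the usual ways to avoid the 429s; the same back-off also explains the kindnet start failure that follows.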
x
+
TestNetworkPlugins/group/kindnet/Start (1147.48s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kindnet-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: exit status 80 (19m7.426100549s)

-- stdout --
	* [kindnet-236437] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "kindnet-236437" primary control-plane node in "kindnet-236437" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	
-- /stdout --
** stderr ** 
	I0317 11:00:47.212024  261225 out.go:345] Setting OutFile to fd 1 ...
	I0317 11:00:47.212147  261225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:00:47.212157  261225 out.go:358] Setting ErrFile to fd 2...
	I0317 11:00:47.212163  261225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:00:47.212365  261225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 11:00:47.212997  261225 out.go:352] Setting JSON to false
	I0317 11:00:47.214296  261225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2540,"bootTime":1742206707,"procs":350,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 11:00:47.214405  261225 start.go:139] virtualization: kvm guest
	I0317 11:00:47.216295  261225 out.go:177] * [kindnet-236437] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 11:00:47.217547  261225 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:00:47.217590  261225 notify.go:220] Checking for updates...
	I0317 11:00:47.219721  261225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:00:47.220874  261225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:00:47.221882  261225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 11:00:47.222912  261225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 11:00:47.223832  261225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:00:47.225096  261225 config.go:182] Loaded profile config "auto-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:00:47.225199  261225 config.go:182] Loaded profile config "kubernetes-upgrade-038579": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:00:47.225278  261225 config.go:182] Loaded profile config "pause-507725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:00:47.225345  261225 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:00:47.248965  261225 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 11:00:47.249081  261225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:00:47.303207  261225 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:74 SystemTime:2025-03-17 11:00:47.292397543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:00:47.303356  261225 docker.go:318] overlay module found
	I0317 11:00:47.305007  261225 out.go:177] * Using the docker driver based on user configuration
	I0317 11:00:47.306158  261225 start.go:297] selected driver: docker
	I0317 11:00:47.306170  261225 start.go:901] validating driver "docker" against <nil>
	I0317 11:00:47.306180  261225 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:00:47.306958  261225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:00:47.354691  261225 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-03-17 11:00:47.34611839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:00:47.354896  261225 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:00:47.355097  261225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:00:47.356773  261225 out.go:177] * Using Docker driver with root privileges
	I0317 11:00:47.357871  261225 cni.go:84] Creating CNI manager for "kindnet"
	I0317 11:00:47.357888  261225 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 11:00:47.357943  261225 start.go:340] cluster config:
	{Name:kindnet-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:00:47.359095  261225 out.go:177] * Starting "kindnet-236437" primary control-plane node in "kindnet-236437" cluster
	I0317 11:00:47.360142  261225 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 11:00:47.361171  261225 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 11:00:47.362030  261225 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:00:47.362067  261225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 11:00:47.362084  261225 cache.go:56] Caching tarball of preloaded images
	I0317 11:00:47.362134  261225 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 11:00:47.362231  261225 preload.go:172] Found /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:00:47.362251  261225 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 11:00:47.362349  261225 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/config.json ...
	I0317 11:00:47.362373  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/config.json: {Name:mk798154d21f6f85b7ace5cc1e6766ad2d16f9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:00:47.381969  261225 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 11:00:47.381986  261225 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 11:00:47.382001  261225 cache.go:230] Successfully downloaded all kic artifacts
	I0317 11:00:47.382029  261225 start.go:360] acquireMachinesLock for kindnet-236437: {Name:mk00fa89ca6524284d0f7b87d08529d4c0119672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:00:47.382122  261225 start.go:364] duration metric: took 73.178µs to acquireMachinesLock for "kindnet-236437"
	I0317 11:00:47.382143  261225 start.go:93] Provisioning new machine with config: &{Name:kindnet-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:00:47.382201  261225 start.go:125] createHost starting for "" (driver="docker")
	I0317 11:00:47.383788  261225 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0317 11:00:47.383981  261225 start.go:159] libmachine.API.Create for "kindnet-236437" (driver="docker")
	I0317 11:00:47.384008  261225 client.go:168] LocalClient.Create starting
	I0317 11:00:47.384058  261225 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem
	I0317 11:00:47.384100  261225 main.go:141] libmachine: Decoding PEM data...
	I0317 11:00:47.384116  261225 main.go:141] libmachine: Parsing certificate...
	I0317 11:00:47.384174  261225 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem
	I0317 11:00:47.384195  261225 main.go:141] libmachine: Decoding PEM data...
	I0317 11:00:47.384205  261225 main.go:141] libmachine: Parsing certificate...
	I0317 11:00:47.384498  261225 cli_runner.go:164] Run: docker network inspect kindnet-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 11:00:47.400185  261225 cli_runner.go:211] docker network inspect kindnet-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 11:00:47.400246  261225 network_create.go:284] running [docker network inspect kindnet-236437] to gather additional debugging logs...
	I0317 11:00:47.400266  261225 cli_runner.go:164] Run: docker network inspect kindnet-236437
	W0317 11:00:47.416420  261225 cli_runner.go:211] docker network inspect kindnet-236437 returned with exit code 1
	I0317 11:00:47.416449  261225 network_create.go:287] error running [docker network inspect kindnet-236437]: docker network inspect kindnet-236437: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-236437 not found
	I0317 11:00:47.416465  261225 network_create.go:289] output of [docker network inspect kindnet-236437]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-236437 not found
	
	** /stderr **
	I0317 11:00:47.416595  261225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:00:47.433205  261225 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
	I0317 11:00:47.433886  261225 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-00bf62ef0133 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:c5:34:86:d6:21} reservation:<nil>}
	I0317 11:00:47.434562  261225 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-81e0001ceae7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:6a:cf:1c:79:e6} reservation:<nil>}
	I0317 11:00:47.435395  261225 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d97770}
	I0317 11:00:47.435416  261225 network_create.go:124] attempt to create docker network kindnet-236437 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0317 11:00:47.435459  261225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-236437 kindnet-236437
	I0317 11:00:47.485526  261225 network_create.go:108] docker network kindnet-236437 192.168.76.0/24 created
	I0317 11:00:47.485556  261225 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-236437" container
	I0317 11:00:47.485625  261225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 11:00:47.502216  261225 cli_runner.go:164] Run: docker volume create kindnet-236437 --label name.minikube.sigs.k8s.io=kindnet-236437 --label created_by.minikube.sigs.k8s.io=true
	I0317 11:00:47.519443  261225 oci.go:103] Successfully created a docker volume kindnet-236437
	I0317 11:00:47.519499  261225 cli_runner.go:164] Run: docker run --rm --name kindnet-236437-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-236437 --entrypoint /usr/bin/test -v kindnet-236437:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 11:00:47.938620  261225 oci.go:107] Successfully prepared a docker volume kindnet-236437
	I0317 11:00:47.938675  261225 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:00:47.938702  261225 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 11:00:47.938772  261225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-236437:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 11:00:52.497781  261225 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-236437:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.558967408s)
	I0317 11:00:52.497813  261225 kic.go:203] duration metric: took 4.559107385s to extract preloaded images to volume ...
	W0317 11:00:52.497930  261225 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 11:00:52.498027  261225 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 11:00:52.548125  261225 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-236437 --name kindnet-236437 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-236437 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-236437 --network kindnet-236437 --ip 192.168.76.2 --volume kindnet-236437:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 11:00:52.810040  261225 cli_runner.go:164] Run: docker container inspect kindnet-236437 --format={{.State.Running}}
	I0317 11:00:52.828899  261225 cli_runner.go:164] Run: docker container inspect kindnet-236437 --format={{.State.Status}}
	I0317 11:00:52.847754  261225 cli_runner.go:164] Run: docker exec kindnet-236437 stat /var/lib/dpkg/alternatives/iptables
	I0317 11:00:52.888410  261225 oci.go:144] the created container "kindnet-236437" has a running status.
	I0317 11:00:52.888445  261225 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/kindnet-236437/id_rsa...
	I0317 11:00:53.571161  261225 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20535-4918/.minikube/machines/kindnet-236437/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 11:00:53.594446  261225 cli_runner.go:164] Run: docker container inspect kindnet-236437 --format={{.State.Status}}
	I0317 11:00:53.610679  261225 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 11:00:53.610701  261225 kic_runner.go:114] Args: [docker exec --privileged kindnet-236437 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 11:00:53.649018  261225 cli_runner.go:164] Run: docker container inspect kindnet-236437 --format={{.State.Status}}
	I0317 11:00:53.665449  261225 machine.go:93] provisionDockerMachine start ...
	I0317 11:00:53.665547  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-236437
	I0317 11:00:53.682721  261225 main.go:141] libmachine: Using SSH client type: native
	I0317 11:00:53.683046  261225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0317 11:00:53.683066  261225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:00:53.814750  261225 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-236437
	
	I0317 11:00:53.814780  261225 ubuntu.go:169] provisioning hostname "kindnet-236437"
	I0317 11:00:53.814839  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-236437
	I0317 11:00:53.832096  261225 main.go:141] libmachine: Using SSH client type: native
	I0317 11:00:53.832303  261225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0317 11:00:53.832316  261225 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-236437 && echo "kindnet-236437" | sudo tee /etc/hostname
	I0317 11:00:53.974271  261225 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-236437
	
	I0317 11:00:53.974365  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-236437
	I0317 11:00:53.991734  261225 main.go:141] libmachine: Using SSH client type: native
	I0317 11:00:53.991991  261225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0317 11:00:53.992021  261225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-236437' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-236437/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-236437' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:00:54.123172  261225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:00:54.123199  261225 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20535-4918/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-4918/.minikube}
	I0317 11:00:54.123235  261225 ubuntu.go:177] setting up certificates
	I0317 11:00:54.123245  261225 provision.go:84] configureAuth start
	I0317 11:00:54.123315  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-236437
	I0317 11:00:54.142947  261225 provision.go:143] copyHostCerts
	I0317 11:00:54.143007  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem, removing ...
	I0317 11:00:54.143019  261225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem
	I0317 11:00:54.143090  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem (1679 bytes)
	I0317 11:00:54.143194  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem, removing ...
	I0317 11:00:54.143210  261225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem
	I0317 11:00:54.143244  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem (1082 bytes)
	I0317 11:00:54.143337  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem, removing ...
	I0317 11:00:54.143364  261225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem
	I0317 11:00:54.143413  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem (1123 bytes)
	I0317 11:00:54.143487  261225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem org=jenkins.kindnet-236437 san=[127.0.0.1 192.168.76.2 kindnet-236437 localhost minikube]
	I0317 11:00:54.272944  261225 provision.go:177] copyRemoteCerts
	I0317 11:00:54.272996  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:00:54.273032  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-236437
	I0317 11:00:54.290402  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/kindnet-236437/id_rsa Username:docker}
	I0317 11:00:54.385053  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:00:54.407686  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0317 11:00:54.431155  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 11:00:54.455112  261225 provision.go:87] duration metric: took 331.83921ms to configureAuth
	I0317 11:00:54.455146  261225 ubuntu.go:193] setting minikube options for container-runtime
	I0317 11:00:54.455342  261225 config.go:182] Loaded profile config "kindnet-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:00:54.455355  261225 machine.go:96] duration metric: took 789.880259ms to provisionDockerMachine
	I0317 11:00:54.455361  261225 client.go:171] duration metric: took 7.0713485s to LocalClient.Create
	I0317 11:00:54.455381  261225 start.go:167] duration metric: took 7.07140048s to libmachine.API.Create "kindnet-236437"
	I0317 11:00:54.455389  261225 start.go:293] postStartSetup for "kindnet-236437" (driver="docker")
	I0317 11:00:54.455401  261225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:00:54.455460  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:00:54.455494  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-236437
	I0317 11:00:54.473005  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/kindnet-236437/id_rsa Username:docker}
	I0317 11:00:54.575913  261225 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:00:54.579230  261225 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 11:00:54.579294  261225 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 11:00:54.579314  261225 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 11:00:54.579323  261225 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 11:00:54.579336  261225 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/addons for local assets ...
	I0317 11:00:54.579393  261225 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/files for local assets ...
	I0317 11:00:54.579495  261225 filesync.go:149] local asset: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem -> 116902.pem in /etc/ssl/certs
	I0317 11:00:54.579620  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:00:54.588406  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:00:54.610796  261225 start.go:296] duration metric: took 155.383928ms for postStartSetup
	I0317 11:00:54.611159  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-236437
	I0317 11:00:54.630320  261225 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/config.json ...
	I0317 11:00:54.630609  261225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 11:00:54.630661  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-236437
	I0317 11:00:54.652517  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/kindnet-236437/id_rsa Username:docker}
	I0317 11:00:54.747677  261225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 11:00:54.751813  261225 start.go:128] duration metric: took 7.369597511s to createHost
	I0317 11:00:54.751850  261225 start.go:83] releasing machines lock for "kindnet-236437", held for 7.369716804s
	I0317 11:00:54.751937  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-236437
	I0317 11:00:54.772175  261225 ssh_runner.go:195] Run: cat /version.json
	I0317 11:00:54.772239  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-236437
	I0317 11:00:54.772325  261225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 11:00:54.772388  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-236437
	I0317 11:00:54.792494  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/kindnet-236437/id_rsa Username:docker}
	I0317 11:00:54.792731  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/kindnet-236437/id_rsa Username:docker}
	I0317 11:00:54.966700  261225 ssh_runner.go:195] Run: systemctl --version
	I0317 11:00:54.970727  261225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:00:54.974607  261225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 11:00:54.998129  261225 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 11:00:54.998198  261225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:00:55.023408  261225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0317 11:00:55.023437  261225 start.go:495] detecting cgroup driver to use...
	I0317 11:00:55.023468  261225 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 11:00:55.023509  261225 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:00:55.036166  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:00:55.046722  261225 docker.go:217] disabling cri-docker service (if available) ...
	I0317 11:00:55.046772  261225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 11:00:55.058400  261225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 11:00:55.070884  261225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 11:00:55.146196  261225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 11:00:55.225022  261225 docker.go:233] disabling docker service ...
	I0317 11:00:55.225084  261225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 11:00:55.242967  261225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 11:00:55.253746  261225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 11:00:55.333791  261225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 11:00:55.408136  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 11:00:55.418476  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:00:55.433223  261225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:00:55.442027  261225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:00:55.451293  261225 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:00:55.451368  261225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:00:55.460389  261225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:00:55.469132  261225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:00:55.478535  261225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:00:55.487149  261225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:00:55.495281  261225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:00:55.503909  261225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:00:55.512371  261225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 11:00:55.520979  261225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:00:55.528238  261225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:00:55.536760  261225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:00:55.615544  261225 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 11:00:55.725485  261225 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 11:00:55.725542  261225 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 11:00:55.728942  261225 start.go:563] Will wait 60s for crictl version
	I0317 11:00:55.729003  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:00:55.731984  261225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:00:55.765377  261225 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 11:00:55.765431  261225 ssh_runner.go:195] Run: containerd --version
	I0317 11:00:55.787594  261225 ssh_runner.go:195] Run: containerd --version
	I0317 11:00:55.810261  261225 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 11:00:55.811517  261225 cli_runner.go:164] Run: docker network inspect kindnet-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:00:55.827709  261225 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0317 11:00:55.831040  261225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
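The bash one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal line, append the fresh mapping, write the result to a temp file, and copy it back over /etc/hosts. The same filter-then-append pattern as a stdlib Go sketch (illustrative; the log uses sudo cp from /tmp, while this sketch stages the temp file next to the target and renames it):

// hostsentry.go — illustrative sketch of the logged /etc/hosts rewrite.
package main

import (
	"bufio"
	"log"
	"os"
	"strings"
)

func main() {
	const suffix = "\thost.minikube.internal"

	in, err := os.Open("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	var kept []string
	sc := bufio.NewScanner(in)
	for sc.Scan() {
		// Equivalent of the log's grep -v: drop stale entries for the name.
		if !strings.HasSuffix(sc.Text(), suffix) {
			kept = append(kept, sc.Text())
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	kept = append(kept, "192.168.76.1"+suffix)

	// Stage in the same directory so the rename cannot cross filesystems.
	tmp := "/etc/hosts.new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
		log.Fatal(err)
	}
}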
	I0317 11:00:55.841101  261225 kubeadm.go:883] updating cluster {Name:kindnet-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:00:55.841223  261225 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:00:55.841290  261225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:00:55.872807  261225 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:00:55.872830  261225 containerd.go:534] Images already preloaded, skipping extraction
	I0317 11:00:55.872878  261225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:00:55.904514  261225 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:00:55.904532  261225 cache_images.go:84] Images are preloaded, skipping loading
	I0317 11:00:55.904544  261225 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 containerd true true} ...
	I0317 11:00:55.904617  261225 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-236437 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kindnet-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0317 11:00:55.904662  261225 ssh_runner.go:195] Run: sudo crictl info
	I0317 11:00:55.939565  261225 cni.go:84] Creating CNI manager for "kindnet"
	I0317 11:00:55.939591  261225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:00:55.939615  261225 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-236437 NodeName:kindnet-236437 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 11:00:55.939735  261225 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kindnet-236437"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
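The KubeletConfiguration document above pins the cgroupfs driver, points the kubelet at the containerd socket, and sets imageGCHighThresholdPercent: 100 plus "0%" evictionHard thresholds, which effectively disables disk-pressure eviction for the test node. A small sketch that parses the same fragment into a local struct, assuming gopkg.in/yaml.v3 as the YAML library (illustrative, not minikube's own config handling):

// checkkubeletcfg.go — parse the KubeletConfiguration knobs shown above.
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	CgroupDriver                string            `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint    string            `yaml:"containerRuntimeEndpoint"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

const doc = `
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
failSwapOn: false
`

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("driver=%s endpoint=%s evictionHard=%v\n",
		cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.EvictionHard)
}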
	
	I0317 11:00:55.939813  261225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:00:55.948252  261225 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 11:00:55.948316  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 11:00:55.956495  261225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0317 11:00:55.972421  261225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:00:55.988412  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2304 bytes)
	I0317 11:00:56.004587  261225 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0317 11:00:56.007646  261225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:00:56.017106  261225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:00:56.095586  261225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:00:56.108052  261225 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437 for IP: 192.168.76.2
	I0317 11:00:56.108075  261225 certs.go:194] generating shared ca certs ...
	I0317 11:00:56.108096  261225 certs.go:226] acquiring lock for ca certs: {Name:mkf58624c63680e02907d28348d45986283847c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:00:56.108265  261225 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key
	I0317 11:00:56.108325  261225 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key
	I0317 11:00:56.108339  261225 certs.go:256] generating profile certs ...
	I0317 11:00:56.108419  261225 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/client.key
	I0317 11:00:56.108444  261225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/client.crt with IP's: []
	I0317 11:00:56.163576  261225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/client.crt ...
	I0317 11:00:56.163603  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/client.crt: {Name:mkd60ba39e6d6e01007abba7759a9ebfc51cafa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:00:56.163751  261225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/client.key ...
	I0317 11:00:56.163762  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/client.key: {Name:mk106a646f58127983de438962148c8a680e31ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:00:56.163839  261225 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.key.650c9677
	I0317 11:00:56.163853  261225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.crt.650c9677 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0317 11:00:56.367466  261225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.crt.650c9677 ...
	I0317 11:00:56.367494  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.crt.650c9677: {Name:mk6673d882e92ab87191aa98d3b890b0ce27da41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:00:56.367642  261225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.key.650c9677 ...
	I0317 11:00:56.367655  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.key.650c9677: {Name:mkca56fa64727b535eaa8061d2539db25f002e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:00:56.367723  261225 certs.go:381] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.crt.650c9677 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.crt
	I0317 11:00:56.367792  261225 certs.go:385] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.key.650c9677 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.key
	I0317 11:00:56.367842  261225 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/proxy-client.key
	I0317 11:00:56.367855  261225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/proxy-client.crt with IP's: []
	I0317 11:00:56.791625  261225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/proxy-client.crt ...
	I0317 11:00:56.791655  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/proxy-client.crt: {Name:mk9f9af22c73e147c8321efe9fcd4cd433339492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:00:56.791818  261225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/proxy-client.key ...
	I0317 11:00:56.791830  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/proxy-client.key: {Name:mkc8fd6c5ea0111ef22a014067472eea32c1b005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
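The certs.go/crypto.go sequence above reuses the shared minikubeCA key pair and mints the per-profile certificates: a minikube-user client cert, an apiserver serving cert whose IP SANs are listed at 11:00:56.163853, and an aggregator proxy-client cert. A self-contained stdlib sketch of that generate-then-sign flow (illustrative only; not minikube's actual helper code):

// signcert.go — create a CA, then issue a serving certificate carrying the
// same IP SANs the log reports for apiserver.crt.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		log.Fatal(err)
	}
	return v
}

func main() {
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs as logged for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
		log.Fatal(err)
	}
}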
	I0317 11:00:56.791997  261225 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem (1338 bytes)
	W0317 11:00:56.792032  261225 certs.go:480] ignoring /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690_empty.pem, impossibly tiny 0 bytes
	I0317 11:00:56.792043  261225 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 11:00:56.792066  261225 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem (1082 bytes)
	I0317 11:00:56.792087  261225 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem (1123 bytes)
	I0317 11:00:56.792111  261225 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem (1679 bytes)
	I0317 11:00:56.792154  261225 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:00:56.792716  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:00:56.815645  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:00:56.837017  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:00:56.858805  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 11:00:56.880034  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 11:00:56.900620  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 11:00:56.922355  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:00:56.944207  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/kindnet-236437/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 11:00:56.967116  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /usr/share/ca-certificates/116902.pem (1708 bytes)
	I0317 11:00:56.991179  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:00:57.014957  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem --> /usr/share/ca-certificates/11690.pem (1338 bytes)
	I0317 11:00:57.038512  261225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:00:57.054669  261225 ssh_runner.go:195] Run: openssl version
	I0317 11:00:57.059657  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116902.pem && ln -fs /usr/share/ca-certificates/116902.pem /etc/ssl/certs/116902.pem"
	I0317 11:00:57.068139  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116902.pem
	I0317 11:00:57.071447  261225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:32 /usr/share/ca-certificates/116902.pem
	I0317 11:00:57.071498  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116902.pem
	I0317 11:00:57.077641  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116902.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:00:57.085765  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:00:57.094285  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:00:57.097385  261225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:00:57.097456  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:00:57.103948  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:00:57.112651  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11690.pem && ln -fs /usr/share/ca-certificates/11690.pem /etc/ssl/certs/11690.pem"
	I0317 11:00:57.121396  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11690.pem
	I0317 11:00:57.124724  261225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:32 /usr/share/ca-certificates/11690.pem
	I0317 11:00:57.124777  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11690.pem
	I0317 11:00:57.131405  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11690.pem /etc/ssl/certs/51391683.0"
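Each ln -fs above publishes a CA under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem) so TLS verifiers can locate it in /etc/ssl/certs by hash. A short sketch of that step, computing the hash by shelling out to the same openssl invocation the log runs (assumes openssl on PATH and write access to /etc/ssl/certs):

// hashlink.go — create the /etc/ssl/certs/<subject-hash>.0 symlink for a CA,
// guarded the same way as the log's `test -L ... || ln -fs ...`.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
	}
}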
	I0317 11:00:57.140114  261225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:00:57.143150  261225 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:00:57.143209  261225 kubeadm.go:392] StartCluster: {Name:kindnet-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:00:57.143341  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 11:00:57.143387  261225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 11:00:57.176145  261225 cri.go:89] found id: ""
	I0317 11:00:57.176213  261225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:00:57.184373  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:00:57.192551  261225 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 11:00:57.192602  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:00:57.200417  261225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:00:57.200432  261225 kubeadm.go:157] found existing configuration files:
	
	I0317 11:00:57.200473  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 11:00:57.207950  261225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:00:57.208016  261225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:00:57.215763  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 11:00:57.224047  261225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:00:57.224109  261225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:00:57.231887  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 11:00:57.239509  261225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:00:57.239560  261225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:00:57.247372  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 11:00:57.255692  261225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:00:57.255737  261225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 11:00:57.263416  261225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 11:00:57.318670  261225 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 11:00:57.319012  261225 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 11:00:57.374682  261225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 11:01:06.328874  261225 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 11:01:06.328933  261225 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:01:06.329006  261225 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 11:01:06.329087  261225 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 11:01:06.329137  261225 kubeadm.go:310] OS: Linux
	I0317 11:01:06.329182  261225 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 11:01:06.329229  261225 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 11:01:06.329274  261225 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 11:01:06.329360  261225 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 11:01:06.329447  261225 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 11:01:06.329518  261225 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 11:01:06.329589  261225 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 11:01:06.329682  261225 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 11:01:06.329752  261225 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 11:01:06.329863  261225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:01:06.329948  261225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:01:06.330030  261225 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:01:06.330094  261225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:01:06.331428  261225 out.go:235]   - Generating certificates and keys ...
	I0317 11:01:06.331499  261225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:01:06.331571  261225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:01:06.331670  261225 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:01:06.331737  261225 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:01:06.331798  261225 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:01:06.331847  261225 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:01:06.331894  261225 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:01:06.332010  261225 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-236437 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 11:01:06.332063  261225 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:01:06.332169  261225 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-236437 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 11:01:06.332271  261225 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:01:06.332368  261225 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:01:06.332440  261225 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:01:06.332518  261225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:01:06.332570  261225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:01:06.332681  261225 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 11:01:06.332800  261225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:01:06.332919  261225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:01:06.333027  261225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:01:06.333148  261225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:01:06.333226  261225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:01:06.335041  261225 out.go:235]   - Booting up control plane ...
	I0317 11:01:06.335126  261225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:01:06.335213  261225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:01:06.335326  261225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:01:06.335438  261225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:01:06.335511  261225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:01:06.335553  261225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:01:06.335667  261225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 11:01:06.335817  261225 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 11:01:06.335909  261225 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.729122ms
	I0317 11:01:06.336017  261225 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 11:01:06.336112  261225 kubeadm.go:310] [api-check] The API server is healthy after 4.501334671s
	I0317 11:01:06.336237  261225 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:01:06.336395  261225 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:01:06.336478  261225 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:01:06.336679  261225 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-236437 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 11:01:06.336742  261225 kubeadm.go:310] [bootstrap-token] Using token: ndqzvj.xrqzxypbx2o5cvse
	I0317 11:01:06.337860  261225 out.go:235]   - Configuring RBAC rules ...
	I0317 11:01:06.337946  261225 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:01:06.338016  261225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:01:06.338147  261225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:01:06.338280  261225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:01:06.338384  261225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:01:06.338481  261225 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:01:06.338620  261225 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:01:06.338697  261225 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:01:06.338765  261225 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:01:06.338774  261225 kubeadm.go:310] 
	I0317 11:01:06.338860  261225 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:01:06.338870  261225 kubeadm.go:310] 
	I0317 11:01:06.338961  261225 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:01:06.338969  261225 kubeadm.go:310] 
	I0317 11:01:06.338990  261225 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:01:06.339044  261225 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:01:06.339093  261225 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:01:06.339099  261225 kubeadm.go:310] 
	I0317 11:01:06.339151  261225 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:01:06.339157  261225 kubeadm.go:310] 
	I0317 11:01:06.339196  261225 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:01:06.339202  261225 kubeadm.go:310] 
	I0317 11:01:06.339290  261225 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:01:06.339402  261225 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:01:06.339506  261225 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:01:06.339516  261225 kubeadm.go:310] 
	I0317 11:01:06.339635  261225 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:01:06.339747  261225 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:01:06.339756  261225 kubeadm.go:310] 
	I0317 11:01:06.339852  261225 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ndqzvj.xrqzxypbx2o5cvse \
	I0317 11:01:06.339946  261225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 \
	I0317 11:01:06.339966  261225 kubeadm.go:310] 	--control-plane 
	I0317 11:01:06.339972  261225 kubeadm.go:310] 
	I0317 11:01:06.340070  261225 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:01:06.340090  261225 kubeadm.go:310] 
	I0317 11:01:06.340206  261225 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ndqzvj.xrqzxypbx2o5cvse \
	I0317 11:01:06.340368  261225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 
	I0317 11:01:06.340381  261225 cni.go:84] Creating CNI manager for "kindnet"
	I0317 11:01:06.342389  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 11:01:06.343377  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 11:01:06.347401  261225 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 11:01:06.347420  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 11:01:06.364642  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 11:01:06.569830  261225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:01:06.569921  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:01:06.569939  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-236437 minikube.k8s.io/updated_at=2025_03_17T11_01_06_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=kindnet-236437 minikube.k8s.io/primary=true
	I0317 11:01:06.707336  261225 ops.go:34] apiserver oom_adj: -16
	I0317 11:01:06.707462  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:01:07.208363  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:01:07.708151  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:01:08.208439  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:01:08.708287  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:01:09.208466  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:01:09.707958  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:01:10.208310  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:01:10.278852  261225 kubeadm.go:1113] duration metric: took 3.708994056s to wait for elevateKubeSystemPrivileges
	I0317 11:01:10.278891  261225 kubeadm.go:394] duration metric: took 13.135686883s to StartCluster
	I0317 11:01:10.278914  261225 settings.go:142] acquiring lock: {Name:mk2a57d556efff40ccd4336229d7a78216b861f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:01:10.279001  261225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:01:10.280874  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/kubeconfig: {Name:mk686b9f6159ab958672b945ae0aa5a9c96e9ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:01:10.281136  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:01:10.281146  261225 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:01:10.281242  261225 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:01:10.281334  261225 config.go:182] Loaded profile config "kindnet-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:01:10.281344  261225 addons.go:69] Setting storage-provisioner=true in profile "kindnet-236437"
	I0317 11:01:10.281356  261225 addons.go:69] Setting default-storageclass=true in profile "kindnet-236437"
	I0317 11:01:10.281370  261225 addons.go:238] Setting addon storage-provisioner=true in "kindnet-236437"
	I0317 11:01:10.281409  261225 host.go:66] Checking if "kindnet-236437" exists ...
	I0317 11:01:10.281372  261225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-236437"
	I0317 11:01:10.281759  261225 cli_runner.go:164] Run: docker container inspect kindnet-236437 --format={{.State.Status}}
	I0317 11:01:10.281954  261225 cli_runner.go:164] Run: docker container inspect kindnet-236437 --format={{.State.Status}}
	I0317 11:01:10.283114  261225 out.go:177] * Verifying Kubernetes components...
	I0317 11:01:10.284233  261225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:01:10.309197  261225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:01:10.310415  261225 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:01:10.310442  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:01:10.310496  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-236437
	I0317 11:01:10.310701  261225 addons.go:238] Setting addon default-storageclass=true in "kindnet-236437"
	I0317 11:01:10.310734  261225 host.go:66] Checking if "kindnet-236437" exists ...
	I0317 11:01:10.311097  261225 cli_runner.go:164] Run: docker container inspect kindnet-236437 --format={{.State.Status}}
	I0317 11:01:10.338754  261225 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:01:10.338780  261225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:01:10.338844  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-236437
	I0317 11:01:10.351386  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/kindnet-236437/id_rsa Username:docker}
	I0317 11:01:10.369007  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/kindnet-236437/id_rsa Username:docker}
	I0317 11:01:10.413700  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
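The pipeline above rewrites CoreDNS's Corefile in flight: it fetches the coredns ConfigMap, uses sed to insert a hosts block immediately before the forward plugin and a log directive before errors, then replaces the ConfigMap with kubectl. Under those two sed expressions the rewritten Corefile fragment would look roughly like this (illustrative; the surrounding stock directives are omitted):

        log
        errors
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

This is what makes host.minikube.internal resolvable from inside pods: queries for that name are answered from the static hosts entry, and everything else falls through to the node's resolver.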
	I0317 11:01:10.428129  261225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:01:10.529972  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:01:10.706926  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:01:11.125499  261225 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0317 11:01:11.128496  261225 node_ready.go:35] waiting up to 15m0s for node "kindnet-236437" to be "Ready" ...
	I0317 11:01:11.137269  261225 node_ready.go:49] node "kindnet-236437" has status "Ready":"True"
	I0317 11:01:11.137298  261225 node_ready.go:38] duration metric: took 8.770115ms for node "kindnet-236437" to be "Ready" ...
	I0317 11:01:11.137309  261225 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:01:11.143842  261225 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace to be "Ready" ...
	I0317 11:01:11.615851  261225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.085836621s)
	I0317 11:01:11.622833  261225 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:01:11.624060  261225 addons.go:514] duration metric: took 1.342818035s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 11:01:11.629087  261225 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-236437" context rescaled to 1 replicas
	I0317 11:01:13.149604  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:01:15.648890  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	... (93 further pod_ready.go:103 poll entries, 11:01:17 through 11:04:49, elided: pod "coredns-668d6bf9bc-vjvg5" in "kube-system" was re-checked roughly every 2.5s and reported "Ready":"False" every time) ...
	I0317 11:04:52.148897  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:54.149147  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:56.149457  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:04:58.648543  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:00.649803  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:03.149068  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:05.648866  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:07.649064  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:09.649491  261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
	I0317 11:05:11.149005  261225 pod_ready.go:82] duration metric: took 4m0.005124542s for pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace to be "Ready" ...
	E0317 11:05:11.149032  261225 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:05:11.149044  261225 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.150773  261225 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-wht7f" not found
	I0317 11:05:11.150799  261225 pod_ready.go:82] duration metric: took 1.746139ms for pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace to be "Ready" ...
	E0317 11:05:11.150812  261225 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-wht7f" not found
	I0317 11:05:11.150820  261225 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.154478  261225 pod_ready.go:93] pod "etcd-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:05:11.154495  261225 pod_ready.go:82] duration metric: took 3.667556ms for pod "etcd-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.154505  261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.158180  261225 pod_ready.go:93] pod "kube-apiserver-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:05:11.158198  261225 pod_ready.go:82] duration metric: took 3.686563ms for pod "kube-apiserver-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.158206  261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.161883  261225 pod_ready.go:93] pod "kube-controller-manager-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:05:11.161902  261225 pod_ready.go:82] duration metric: took 3.688883ms for pod "kube-controller-manager-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.161912  261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-sr64l" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.347703  261225 pod_ready.go:93] pod "kube-proxy-sr64l" in "kube-system" namespace has status "Ready":"True"
	I0317 11:05:11.347728  261225 pod_ready.go:82] duration metric: took 185.808929ms for pod "kube-proxy-sr64l" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.347737  261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.748058  261225 pod_ready.go:93] pod "kube-scheduler-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:05:11.748080  261225 pod_ready.go:82] duration metric: took 400.336874ms for pod "kube-scheduler-kindnet-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:05:11.748088  261225 pod_ready.go:39] duration metric: took 4m0.610767407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
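[Editor's note] The four-minute loop condensed above is minikube's extra wait on system pods: it polls each pod's Ready condition on a ~2.5s cadence until the condition turns True, the pod disappears, or the context deadline expires (which is what happens at 11:05:11). A minimal client-go sketch of that per-pod check, not minikube's actual pod_ready.go, with a placeholder kubeconfig path and the pod name taken from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; minikube resolves this from its profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-668d6bf9bc-vjvg5", metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			fmt.Println("pod not found, skipping") // the log's "(skipping!)" case
			return
		case err == nil && isPodReady(pod):
			fmt.Println("pod is Ready")
			return
		case ctx.Err() != nil:
			fmt.Println("context deadline exceeded") // what the log hits at 11:05:11
			return
		}
		time.Sleep(2500 * time.Millisecond) // ~2.5s cadence seen in the log
	}
}
```

The not-found branch matches the "(skipping!)" lines for coredns-668d6bf9bc-wht7f above: a pod deleted by its ReplicaSet is skipped rather than waited on.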
	I0317 11:05:11.748109  261225 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:05:11.748151  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:05:11.748204  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:05:11.782166  261225 cri.go:89] found id: "8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:11.782194  261225 cri.go:89] found id: ""
	I0317 11:05:11.782202  261225 logs.go:282] 1 containers: [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5]
	I0317 11:05:11.782250  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:11.785774  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:05:11.785828  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:05:11.818679  261225 cri.go:89] found id: "23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:11.818709  261225 cri.go:89] found id: ""
	I0317 11:05:11.818718  261225 logs.go:282] 1 containers: [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9]
	I0317 11:05:11.818773  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:11.822242  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:05:11.822313  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:05:11.855724  261225 cri.go:89] found id: ""
	I0317 11:05:11.855749  261225 logs.go:282] 0 containers: []
	W0317 11:05:11.855757  261225 logs.go:284] No container was found matching "coredns"
	I0317 11:05:11.855762  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:05:11.855840  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:05:11.889868  261225 cri.go:89] found id: "e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:11.889895  261225 cri.go:89] found id: ""
	I0317 11:05:11.889905  261225 logs.go:282] 1 containers: [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997]
	I0317 11:05:11.889968  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:11.893455  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:05:11.893528  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:05:11.930185  261225 cri.go:89] found id: "97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:11.930215  261225 cri.go:89] found id: ""
	I0317 11:05:11.930226  261225 logs.go:282] 1 containers: [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7]
	I0317 11:05:11.930281  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:11.934085  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:05:11.934163  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:05:11.969461  261225 cri.go:89] found id: "26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:11.969486  261225 cri.go:89] found id: ""
	I0317 11:05:11.969495  261225 logs.go:282] 1 containers: [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405]
	I0317 11:05:11.969554  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:11.973137  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:05:11.973221  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:05:12.007038  261225 cri.go:89] found id: ""
	I0317 11:05:12.007061  261225 logs.go:282] 0 containers: []
	W0317 11:05:12.007068  261225 logs.go:284] No container was found matching "kindnet"
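[Editor's note] The discovery pass above is literally `sudo crictl ps -a --quiet --name=<component>` run over SSH by ssh_runner; an empty ID list yields the "No container was found matching" warnings for coredns and kindnet, consistent with the CNI pod never starting. A sketch of the same step, shelling out locally rather than through minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mimics the "listing CRI containers" step: ask crictl for all
// container IDs (running or exited) whose name matches component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	// crictl --quiet prints one 64-hex ID per line; no output means no match.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c) // coredns/kindnet case above
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```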
	I0317 11:05:12.007082  261225 logs.go:123] Gathering logs for dmesg ...
	I0317 11:05:12.007094  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:05:12.027405  261225 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:05:12.027439  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:05:12.114815  261225 logs.go:123] Gathering logs for kube-scheduler [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997] ...
	I0317 11:05:12.114845  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:12.157696  261225 logs.go:123] Gathering logs for kube-proxy [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7] ...
	I0317 11:05:12.157731  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:12.195338  261225 logs.go:123] Gathering logs for containerd ...
	I0317 11:05:12.195366  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:05:12.239939  261225 logs.go:123] Gathering logs for kubelet ...
	I0317 11:05:12.239978  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:05:12.332451  261225 logs.go:123] Gathering logs for kube-apiserver [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5] ...
	I0317 11:05:12.332491  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:12.375771  261225 logs.go:123] Gathering logs for etcd [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9] ...
	I0317 11:05:12.375804  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:12.416166  261225 logs.go:123] Gathering logs for kube-controller-manager [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405] ...
	I0317 11:05:12.416200  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:12.467570  261225 logs.go:123] Gathering logs for container status ...
	I0317 11:05:12.467603  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:05:15.008253  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:05:15.020269  261225 api_server.go:72] duration metric: took 4m4.739086442s to wait for apiserver process to appear ...
	I0317 11:05:15.020303  261225 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:05:15.020339  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:05:15.020402  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:05:15.054066  261225 cri.go:89] found id: "8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:15.054088  261225 cri.go:89] found id: ""
	I0317 11:05:15.054096  261225 logs.go:282] 1 containers: [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5]
	I0317 11:05:15.054147  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:15.057724  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:05:15.057783  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:05:15.090544  261225 cri.go:89] found id: "23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:15.090565  261225 cri.go:89] found id: ""
	I0317 11:05:15.090572  261225 logs.go:282] 1 containers: [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9]
	I0317 11:05:15.090614  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:15.094062  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:05:15.094127  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:05:15.132281  261225 cri.go:89] found id: ""
	I0317 11:05:15.132308  261225 logs.go:282] 0 containers: []
	W0317 11:05:15.132319  261225 logs.go:284] No container was found matching "coredns"
	I0317 11:05:15.132327  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:05:15.132383  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:05:15.166781  261225 cri.go:89] found id: "e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:15.166825  261225 cri.go:89] found id: ""
	I0317 11:05:15.166835  261225 logs.go:282] 1 containers: [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997]
	I0317 11:05:15.166893  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:15.170624  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:05:15.170690  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:05:15.203912  261225 cri.go:89] found id: "97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:15.203939  261225 cri.go:89] found id: ""
	I0317 11:05:15.203950  261225 logs.go:282] 1 containers: [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7]
	I0317 11:05:15.204008  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:15.207632  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:05:15.207715  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:05:15.241079  261225 cri.go:89] found id: "26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:15.241106  261225 cri.go:89] found id: ""
	I0317 11:05:15.241117  261225 logs.go:282] 1 containers: [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405]
	I0317 11:05:15.241174  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:15.244691  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:05:15.244758  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:05:15.280054  261225 cri.go:89] found id: ""
	I0317 11:05:15.280078  261225 logs.go:282] 0 containers: []
	W0317 11:05:15.280086  261225 logs.go:284] No container was found matching "kindnet"
	I0317 11:05:15.280099  261225 logs.go:123] Gathering logs for kube-apiserver [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5] ...
	I0317 11:05:15.280111  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:15.321837  261225 logs.go:123] Gathering logs for kube-scheduler [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997] ...
	I0317 11:05:15.321870  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:15.364421  261225 logs.go:123] Gathering logs for kube-proxy [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7] ...
	I0317 11:05:15.364456  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:15.398977  261225 logs.go:123] Gathering logs for kube-controller-manager [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405] ...
	I0317 11:05:15.399005  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:15.449068  261225 logs.go:123] Gathering logs for containerd ...
	I0317 11:05:15.449101  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:05:15.495271  261225 logs.go:123] Gathering logs for kubelet ...
	I0317 11:05:15.495313  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:05:15.584229  261225 logs.go:123] Gathering logs for dmesg ...
	I0317 11:05:15.584269  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:05:15.603621  261225 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:05:15.603651  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:05:15.689841  261225 logs.go:123] Gathering logs for etcd [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9] ...
	I0317 11:05:15.689875  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:15.731335  261225 logs.go:123] Gathering logs for container status ...
	I0317 11:05:15.731369  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:05:18.269898  261225 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0317 11:05:18.274573  261225 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0317 11:05:18.275657  261225 api_server.go:141] control plane version: v1.32.2
	I0317 11:05:18.275685  261225 api_server.go:131] duration metric: took 3.255374368s to wait for apiserver health ...
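[Editor's note] Once the kube-apiserver process is confirmed via pgrep, minikube gates on a plain HTTPS GET against /healthz returning 200 "ok", as it does here at 11:05:18. A stdlib-only sketch of that probe against the endpoint from the log; InsecureSkipVerify is a shortcut for the sketch, where minikube would instead trust the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch-only: skip cert verification; real callers should pin the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz") // endpoint from the log
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // "ok"
				return
			}
		}
		time.Sleep(time.Second) // keep probing until healthy or the caller gives up
	}
}
```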
	I0317 11:05:18.275696  261225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:05:18.275723  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:05:18.275782  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:05:18.308555  261225 cri.go:89] found id: "8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:18.308574  261225 cri.go:89] found id: ""
	I0317 11:05:18.308581  261225 logs.go:282] 1 containers: [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5]
	I0317 11:05:18.308628  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:18.311845  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:05:18.311901  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:05:18.344040  261225 cri.go:89] found id: "23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:18.344062  261225 cri.go:89] found id: ""
	I0317 11:05:18.344079  261225 logs.go:282] 1 containers: [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9]
	I0317 11:05:18.344138  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:18.347489  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:05:18.347549  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:05:18.382251  261225 cri.go:89] found id: ""
	I0317 11:05:18.382272  261225 logs.go:282] 0 containers: []
	W0317 11:05:18.382280  261225 logs.go:284] No container was found matching "coredns"
	I0317 11:05:18.382286  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:05:18.382340  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:05:18.416712  261225 cri.go:89] found id: "e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:18.416729  261225 cri.go:89] found id: ""
	I0317 11:05:18.416736  261225 logs.go:282] 1 containers: [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997]
	I0317 11:05:18.416777  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:18.420319  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:05:18.420397  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:05:18.454494  261225 cri.go:89] found id: "97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:18.454520  261225 cri.go:89] found id: ""
	I0317 11:05:18.454539  261225 logs.go:282] 1 containers: [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7]
	I0317 11:05:18.454594  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:18.457995  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:05:18.458063  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:05:18.490148  261225 cri.go:89] found id: "26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:18.490167  261225 cri.go:89] found id: ""
	I0317 11:05:18.490174  261225 logs.go:282] 1 containers: [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405]
	I0317 11:05:18.490225  261225 ssh_runner.go:195] Run: which crictl
	I0317 11:05:18.493459  261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:05:18.493515  261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:05:18.525609  261225 cri.go:89] found id: ""
	I0317 11:05:18.525633  261225 logs.go:282] 0 containers: []
	W0317 11:05:18.525644  261225 logs.go:284] No container was found matching "kindnet"
	I0317 11:05:18.525661  261225 logs.go:123] Gathering logs for kubelet ...
	I0317 11:05:18.525676  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:05:18.611130  261225 logs.go:123] Gathering logs for dmesg ...
	I0317 11:05:18.611164  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:05:18.629424  261225 logs.go:123] Gathering logs for kube-apiserver [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5] ...
	I0317 11:05:18.629451  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
	I0317 11:05:18.668784  261225 logs.go:123] Gathering logs for kube-scheduler [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997] ...
	I0317 11:05:18.668814  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
	I0317 11:05:18.707925  261225 logs.go:123] Gathering logs for kube-proxy [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7] ...
	I0317 11:05:18.707953  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
	I0317 11:05:18.745255  261225 logs.go:123] Gathering logs for kube-controller-manager [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405] ...
	I0317 11:05:18.745282  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
	I0317 11:05:18.792139  261225 logs.go:123] Gathering logs for containerd ...
	I0317 11:05:18.792168  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:05:18.837395  261225 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:05:18.837426  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:05:18.927307  261225 logs.go:123] Gathering logs for etcd [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9] ...
	I0317 11:05:18.927334  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
	I0317 11:05:18.970538  261225 logs.go:123] Gathering logs for container status ...
	I0317 11:05:18.970572  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:05:21.510656  261225 system_pods.go:59] 8 kube-system pods found
	I0317 11:05:21.510691  261225 system_pods.go:61] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:21.510697  261225 system_pods.go:61] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:21.510704  261225 system_pods.go:61] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:21.510711  261225 system_pods.go:61] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:21.510715  261225 system_pods.go:61] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:21.510718  261225 system_pods.go:61] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:21.510722  261225 system_pods.go:61] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:21.510725  261225 system_pods.go:61] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:21.510731  261225 system_pods.go:74] duration metric: took 3.235029547s to wait for pod list to return data ...
	I0317 11:05:21.510740  261225 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:05:21.513446  261225 default_sa.go:45] found service account: "default"
	I0317 11:05:21.513476  261225 default_sa.go:55] duration metric: took 2.728168ms for default service account to be created ...
	I0317 11:05:21.513489  261225 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:05:21.516171  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:05:21.516197  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:05:21.516205  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:05:21.516212  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:05:21.516216  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:05:21.516220  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:05:21.516223  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:05:21.516226  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:05:21.516228  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:05:21.516246  261225 retry.go:31] will retry after 304.55093ms: missing components: kube-dns
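[Editor's note] The "waiting for k8s-apps to be running" gate requires each expected component label to have at least one pod in Running phase; the k8s-app=kube-dns selector only ever matches the Pending coredns pod, hence "missing components: kube-dns" on every attempt. A rough client-go approximation of that gate (selector list assumed from the wait list logged at pod_ready.go:39, not minikube's exact system_pods.go):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// missingComponents approximates the "k8s-apps running" gate: every expected
// label selector must match at least one kube-system pod in Running phase.
func missingComponents(ctx context.Context, cs kubernetes.Interface, selectors []string) ([]string, error) {
	var missing []string
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return nil, err
		}
		running := false
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running = true
				break
			}
		}
		if !running {
			missing = append(missing, sel) // here: "k8s-app=kube-dns", since coredns stays Pending
		}
	}
	return missing, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Assumed from the pod_ready.go:39 wait list above.
	sels := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	missing, err := missingComponents(context.Background(), cs, sels)
	fmt.Println(missing, err)
}
```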
	[... the same system_pods.go:86 listing repeated before each retry: coredns-668d6bf9bc-vjvg5 and kindnet-zvsqh stayed Pending while the other six pods stayed Running. Backoffs grew from ~300ms toward ~20s (301ms, 479ms, 442ms, 658ms, 610ms, 985ms, 981ms, 1.12s, 1.57s, 2.67s, 3.26s, 3.97s, 4.76s, 5.47s, 5.88s, 9.35s, 9.59s, 15.5s), each ending in "missing components: kube-dns" ...]
	I0317 11:06:29.522635  261225 retry.go:31] will retry after 19.290967428s: missing components: kube-dns
	I0317 11:06:48.816926  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:06:48.816957  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:06:48.816963  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:06:48.816971  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:06:48.816978  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:06:48.816981  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:06:48.816985  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:06:48.816988  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:06:48.816991  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:06:48.817004  261225 retry.go:31] will retry after 26.212373787s: missing components: kube-dns
	I0317 11:07:15.034531  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:07:15.034568  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:07:15.034577  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:07:15.034588  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:07:15.034594  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:07:15.034600  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:07:15.034608  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:07:15.034619  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:07:15.034624  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:07:15.034641  261225 retry.go:31] will retry after 28.925757751s: missing components: kube-dns
	I0317 11:07:43.964848  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:07:43.964882  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:07:43.964889  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:07:43.964898  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:07:43.964903  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:07:43.964907  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:07:43.964911  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:07:43.964914  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:07:43.964917  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:07:43.964929  261225 retry.go:31] will retry after 31.458446993s: missing components: kube-dns
	I0317 11:08:15.427421  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:08:15.427462  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:08:15.427472  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:08:15.427483  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:08:15.427489  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:08:15.427495  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:08:15.427500  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:08:15.427505  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:08:15.427509  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:08:15.427522  261225 retry.go:31] will retry after 32.96114545s: missing components: kube-dns
	I0317 11:08:48.392505  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:08:48.392536  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:08:48.392542  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:08:48.392549  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:08:48.392555  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:08:48.392561  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:08:48.392566  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:08:48.392571  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:08:48.392579  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:08:48.392597  261225 retry.go:31] will retry after 40.97829734s: missing components: kube-dns
	I0317 11:09:29.375227  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:09:29.375311  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:09:29.375319  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:09:29.375329  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:09:29.375333  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:09:29.375338  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:09:29.375341  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:09:29.375346  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:09:29.375352  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:09:29.375372  261225 retry.go:31] will retry after 46.695170565s: missing components: kube-dns
	I0317 11:10:16.074929  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:10:16.074966  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:10:16.074972  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:10:16.074979  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:10:16.074983  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:10:16.074987  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:10:16.074990  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:10:16.074994  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:10:16.074997  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:10:16.075012  261225 retry.go:31] will retry after 45.623120609s: missing components: kube-dns
	I0317 11:11:01.701666  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:11:01.701704  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:11:01.701711  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:11:01.701719  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:11:01.701723  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:11:01.701727  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:11:01.701730  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:11:01.701733  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:11:01.701736  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:11:01.701750  261225 retry.go:31] will retry after 50.550006784s: missing components: kube-dns
	I0317 11:11:52.256350  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:11:52.256393  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:11:52.256403  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:11:52.256416  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:11:52.256423  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:11:52.256429  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:11:52.256436  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:11:52.256441  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:11:52.256449  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:11:52.256468  261225 retry.go:31] will retry after 1m1.522198563s: missing components: kube-dns
	I0317 11:12:53.783038  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:12:53.783078  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:12:53.783087  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:12:53.783094  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:12:53.783097  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:12:53.783101  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:12:53.783104  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:12:53.783109  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:12:53.783112  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:12:53.783125  261225 retry.go:31] will retry after 1m3.642236269s: missing components: kube-dns
	I0317 11:13:57.430062  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:13:57.430097  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:13:57.430103  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:13:57.430113  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:13:57.430118  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:13:57.430122  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:13:57.430126  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:13:57.430129  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:13:57.430133  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:13:57.430145  261225 retry.go:31] will retry after 1m3.788794867s: missing components: kube-dns
	I0317 11:15:01.222682  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:15:01.222735  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:01.222747  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:15:01.222758  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:15:01.222765  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:15:01.222772  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:15:01.222778  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:15:01.222784  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:15:01.222790  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:15:01.222812  261225 retry.go:31] will retry after 52.970620414s: missing components: kube-dns
	I0317 11:15:54.197710  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:15:54.197744  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:54.197752  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:15:54.197760  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:15:54.197764  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:15:54.197768  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:15:54.197771  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:15:54.197774  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:15:54.197778  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:15:54.197792  261225 retry.go:31] will retry after 1m11.766722955s: missing components: kube-dns
	I0317 11:17:05.970766  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:17:05.970802  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:17:05.970808  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:17:05.970815  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:17:05.970821  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:17:05.970827  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:17:05.970831  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:17:05.970834  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:17:05.970837  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:17:05.970863  261225 retry.go:31] will retry after 59.737236828s: missing components: kube-dns
	I0317 11:18:05.714126  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:18:05.714166  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:18:05.714173  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:18:05.714179  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:18:05.714183  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:18:05.714187  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:18:05.714191  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:18:05.714194  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:18:05.714197  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:18:05.714213  261225 retry.go:31] will retry after 51.728844516s: missing components: kube-dns
	I0317 11:18:57.447141  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:18:57.447181  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:18:57.447189  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:18:57.447200  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:18:57.447205  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:18:57.447212  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:18:57.447218  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:18:57.447224  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:18:57.447230  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:18:57.447294  261225 retry.go:31] will retry after 57.134539687s: missing components: kube-dns
	I0317 11:19:54.586839  261225 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:54.586883  261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:54.586890  261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
	I0317 11:19:54.586899  261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:54.586904  261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
	I0317 11:19:54.586908  261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
	I0317 11:19:54.586912  261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
	I0317 11:19:54.586959  261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
	I0317 11:19:54.586970  261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
	I0317 11:19:54.589228  261225 out.go:201] 
	W0317 11:19:54.590706  261225 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:19:54.590729  261225 out.go:270] * 
	W0317 11:19:54.591640  261225 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:19:54.592663  261225 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (1147.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (1621.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0317 11:04:14.789230   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:04:44.177529   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:07:17.861305   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (27m1.010411345s)

                                                
                                                
-- stdout --
	* [calico-236437] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-236437" primary control-plane node in "calico-236437" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 11:02:24.880858  271403 out.go:345] Setting OutFile to fd 1 ...
	I0317 11:02:24.881135  271403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:02:24.881147  271403 out.go:358] Setting ErrFile to fd 2...
	I0317 11:02:24.881151  271403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:02:24.881334  271403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 11:02:24.882486  271403 out.go:352] Setting JSON to false
	I0317 11:02:24.884073  271403 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2638,"bootTime":1742206707,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 11:02:24.884163  271403 start.go:139] virtualization: kvm guest
	I0317 11:02:24.885681  271403 out.go:177] * [calico-236437] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 11:02:24.887539  271403 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:02:24.887565  271403 notify.go:220] Checking for updates...
	I0317 11:02:24.889529  271403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:02:24.890553  271403 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:02:24.891476  271403 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 11:02:24.892387  271403 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 11:02:24.893262  271403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:02:24.894457  271403 config.go:182] Loaded profile config "auto-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:02:24.894580  271403 config.go:182] Loaded profile config "kindnet-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:02:24.894677  271403 config.go:182] Loaded profile config "pause-507725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:02:24.894762  271403 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:02:24.918017  271403 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 11:02:24.918114  271403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:02:24.969860  271403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:02:24.960688592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:02:24.969970  271403 docker.go:318] overlay module found
	I0317 11:02:24.971694  271403 out.go:177] * Using the docker driver based on user configuration
	I0317 11:02:24.972796  271403 start.go:297] selected driver: docker
	I0317 11:02:24.972809  271403 start.go:901] validating driver "docker" against <nil>
	I0317 11:02:24.972827  271403 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:02:24.973657  271403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:02:25.022032  271403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:02:25.012636564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:02:25.022160  271403 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:02:25.022392  271403 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:02:25.023911  271403 out.go:177] * Using Docker driver with root privileges
	I0317 11:02:25.024881  271403 cni.go:84] Creating CNI manager for "calico"
	I0317 11:02:25.024899  271403 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0317 11:02:25.024977  271403 start.go:340] cluster config:
	{Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:02:25.026106  271403 out.go:177] * Starting "calico-236437" primary control-plane node in "calico-236437" cluster
	I0317 11:02:25.027136  271403 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 11:02:25.028276  271403 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 11:02:25.029237  271403 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:02:25.029286  271403 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 11:02:25.029305  271403 cache.go:56] Caching tarball of preloaded images
	I0317 11:02:25.029318  271403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 11:02:25.029388  271403 preload.go:172] Found /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:02:25.029403  271403 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 11:02:25.029535  271403 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/config.json ...
	I0317 11:02:25.029562  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/config.json: {Name:mka28e5f5151a7bb8665b9fadb1eddd447540b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:25.050614  271403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 11:02:25.050633  271403 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 11:02:25.050647  271403 cache.go:230] Successfully downloaded all kic artifacts
	I0317 11:02:25.050674  271403 start.go:360] acquireMachinesLock for calico-236437: {Name:mka22ede0df163978b69124089e295c5c09c2417 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:02:25.050757  271403 start.go:364] duration metric: took 70.02µs to acquireMachinesLock for "calico-236437"
	I0317 11:02:25.050781  271403 start.go:93] Provisioning new machine with config: &{Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:02:25.050872  271403 start.go:125] createHost starting for "" (driver="docker")
	I0317 11:02:25.052899  271403 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0317 11:02:25.053169  271403 start.go:159] libmachine.API.Create for "calico-236437" (driver="docker")
	I0317 11:02:25.053195  271403 client.go:168] LocalClient.Create starting
	I0317 11:02:25.053249  271403 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem
	I0317 11:02:25.053279  271403 main.go:141] libmachine: Decoding PEM data...
	I0317 11:02:25.053293  271403 main.go:141] libmachine: Parsing certificate...
	I0317 11:02:25.053336  271403 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem
	I0317 11:02:25.053354  271403 main.go:141] libmachine: Decoding PEM data...
	I0317 11:02:25.053364  271403 main.go:141] libmachine: Parsing certificate...
	I0317 11:02:25.053671  271403 cli_runner.go:164] Run: docker network inspect calico-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 11:02:25.069801  271403 cli_runner.go:211] docker network inspect calico-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 11:02:25.069854  271403 network_create.go:284] running [docker network inspect calico-236437] to gather additional debugging logs...
	I0317 11:02:25.069871  271403 cli_runner.go:164] Run: docker network inspect calico-236437
	W0317 11:02:25.086515  271403 cli_runner.go:211] docker network inspect calico-236437 returned with exit code 1
	I0317 11:02:25.086545  271403 network_create.go:287] error running [docker network inspect calico-236437]: docker network inspect calico-236437: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-236437 not found
	I0317 11:02:25.086566  271403 network_create.go:289] output of [docker network inspect calico-236437]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-236437 not found
	
	** /stderr **
	I0317 11:02:25.086714  271403 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:02:25.103494  271403 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
	I0317 11:02:25.104219  271403 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-00bf62ef0133 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:c5:34:86:d6:21} reservation:<nil>}
	I0317 11:02:25.104910  271403 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-81e0001ceae7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:6a:cf:1c:79:e6} reservation:<nil>}
	I0317 11:02:25.105515  271403 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-16edb2a113e3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d6:59:06:a9:a8:e8} reservation:<nil>}
	I0317 11:02:25.106325  271403 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d7f060}
	I0317 11:02:25.106346  271403 network_create.go:124] attempt to create docker network calico-236437 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0317 11:02:25.106383  271403 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-236437 calico-236437
	I0317 11:02:25.157870  271403 network_create.go:108] docker network calico-236437 192.168.85.0/24 created
	I0317 11:02:25.157905  271403 kic.go:121] calculated static IP "192.168.85.2" for the "calico-236437" container
	I0317 11:02:25.157997  271403 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 11:02:25.175038  271403 cli_runner.go:164] Run: docker volume create calico-236437 --label name.minikube.sigs.k8s.io=calico-236437 --label created_by.minikube.sigs.k8s.io=true
	I0317 11:02:25.193023  271403 oci.go:103] Successfully created a docker volume calico-236437
	I0317 11:02:25.193103  271403 cli_runner.go:164] Run: docker run --rm --name calico-236437-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-236437 --entrypoint /usr/bin/test -v calico-236437:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 11:02:25.607335  271403 oci.go:107] Successfully prepared a docker volume calico-236437
	I0317 11:02:25.607382  271403 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:02:25.607404  271403 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 11:02:25.607460  271403 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-236437:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 11:02:30.089006  271403 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-236437:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.481483792s)
	I0317 11:02:30.089037  271403 kic.go:203] duration metric: took 4.481630761s to extract preloaded images to volume ...
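	
	(The extraction step runs tar inside a throwaway container so the lz4 preload lands directly in the named volume. A hedged Go sketch of driving the same docker invocation with os/exec; paths and image reference are copied from the log, error handling is minimal.)
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    func main() {
	        tarball := "/home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4"
	        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185"
	        // Mount the tarball read-only, mount the target volume, and let the
	        // image's tar binary decompress (-I lz4) straight into the volume.
	        cmd := exec.Command("docker", "run", "--rm",
	            "--entrypoint", "/usr/bin/tar",
	            "-v", tarball+":/preloaded.tar:ro",
	            "-v", "calico-236437:/extractDir",
	            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	        if out, err := cmd.CombinedOutput(); err != nil {
	            fmt.Printf("extract failed: %v\n%s", err, out)
	        }
	    }
	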
	W0317 11:02:30.089153  271403 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 11:02:30.089236  271403 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 11:02:30.143191  271403 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-236437 --name calico-236437 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-236437 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-236437 --network calico-236437 --ip 192.168.85.2 --volume calico-236437:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
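	
	(Note the --publish=127.0.0.1::22 form: with no host port given, Docker binds an ephemeral loopback port, which is why the SSH steps below first run docker container inspect on "22/tcp" and then dial 127.0.0.1:33068.)
	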
	I0317 11:02:30.402985  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Running}}
	I0317 11:02:30.421737  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:30.443380  271403 cli_runner.go:164] Run: docker exec calico-236437 stat /var/lib/dpkg/alternatives/iptables
	I0317 11:02:30.487803  271403 oci.go:144] the created container "calico-236437" has a running status.
	I0317 11:02:30.487842  271403 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa...
	I0317 11:02:30.966099  271403 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
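	
	(Key creation here is a plain RSA keypair plus an OpenSSH-format public key, the 381-byte authorized_keys payload above. A minimal sketch of producing the same two artifacts with crypto/rsa and golang.org/x/crypto/ssh; this is not minikube's exact code.)
	
	    package main
	
	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	
	        "golang.org/x/crypto/ssh"
	    )
	
	    func main() {
	        // Private key, PEM-encoded as written to .../machines/<name>/id_rsa.
	        priv, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        privPEM := pem.EncodeToMemory(&pem.Block{
	            Type:  "RSA PRIVATE KEY",
	            Bytes: x509.MarshalPKCS1PrivateKey(priv),
	        })
	
	        // Public half in authorized_keys format ("ssh-rsa AAAA... ").
	        pub, err := ssh.NewPublicKey(&priv.PublicKey)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("%d-byte private key PEM\n", len(privPEM))
	        fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
	    }
	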
	I0317 11:02:30.989095  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:31.006629  271403 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 11:02:31.006654  271403 kic_runner.go:114] Args: [docker exec --privileged calico-236437 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 11:02:31.052822  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:31.073514  271403 machine.go:93] provisionDockerMachine start ...
	I0317 11:02:31.073608  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:31.091435  271403 main.go:141] libmachine: Using SSH client type: native
	I0317 11:02:31.091672  271403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0317 11:02:31.091683  271403 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:02:31.230753  271403 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-236437
	
	I0317 11:02:31.230782  271403 ubuntu.go:169] provisioning hostname "calico-236437"
	I0317 11:02:31.230855  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:31.248577  271403 main.go:141] libmachine: Using SSH client type: native
	I0317 11:02:31.248869  271403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0317 11:02:31.248892  271403 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-236437 && echo "calico-236437" | sudo tee /etc/hostname
	I0317 11:02:31.389908  271403 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-236437
	
	I0317 11:02:31.390001  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:31.407223  271403 main.go:141] libmachine: Using SSH client type: native
	I0317 11:02:31.407517  271403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0317 11:02:31.407545  271403 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-236437' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-236437/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-236437' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:02:31.543474  271403 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:02:31.543500  271403 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20535-4918/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-4918/.minikube}
	I0317 11:02:31.543521  271403 ubuntu.go:177] setting up certificates
	I0317 11:02:31.543534  271403 provision.go:84] configureAuth start
	I0317 11:02:31.543589  271403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-236437
	I0317 11:02:31.561231  271403 provision.go:143] copyHostCerts
	I0317 11:02:31.561284  271403 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem, removing ...
	I0317 11:02:31.561292  271403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem
	I0317 11:02:31.561354  271403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem (1082 bytes)
	I0317 11:02:31.561446  271403 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem, removing ...
	I0317 11:02:31.561454  271403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem
	I0317 11:02:31.561478  271403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem (1123 bytes)
	I0317 11:02:31.561530  271403 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem, removing ...
	I0317 11:02:31.561537  271403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem
	I0317 11:02:31.561562  271403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem (1679 bytes)
	I0317 11:02:31.561607  271403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem org=jenkins.calico-236437 san=[127.0.0.1 192.168.85.2 calico-236437 localhost minikube]
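	
	(The server cert above is an ordinary CA-signed certificate whose SANs cover every name the API endpoint may be reached by. A hedged crypto/x509 sketch of issuing such a cert, with the SAN list taken from the log line; the self-signed CA here stands in for ca.pem/ca-key.pem, and errors are elided for brevity.)
	
	    package main
	
	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "fmt"
	        "math/big"
	        "net"
	        "time"
	    )
	
	    func main() {
	        // Throwaway CA standing in for the existing ca.pem/ca-key.pem.
	        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        caTmpl := &x509.Certificate{
	            SerialNumber:          big.NewInt(1),
	            Subject:               pkix.Name{CommonName: "minikubeCA"},
	            NotBefore:             time.Now(),
	            NotAfter:              time.Now().Add(24 * time.Hour),
	            IsCA:                  true,
	            KeyUsage:              x509.KeyUsageCertSign,
	            BasicConstraintsValid: true,
	        }
	        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	        caCert, _ := x509.ParseCertificate(caDER)
	
	        // Server cert carrying the SANs from the log line above.
	        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        srvTmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(2),
	            Subject:      pkix.Name{Organization: []string{"jenkins.calico-236437"}},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	            DNSNames:     []string{"calico-236437", "localhost", "minikube"},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("server cert: %d DER bytes\n", len(srvDER))
	    }
	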
	I0317 11:02:31.992225  271403 provision.go:177] copyRemoteCerts
	I0317 11:02:31.992284  271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:02:31.992319  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:32.009677  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:32.104042  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:02:32.126981  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 11:02:32.149635  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 11:02:32.172473  271403 provision.go:87] duration metric: took 628.925048ms to configureAuth
	I0317 11:02:32.172509  271403 ubuntu.go:193] setting minikube options for container-runtime
	I0317 11:02:32.172673  271403 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:02:32.172685  271403 machine.go:96] duration metric: took 1.099153553s to provisionDockerMachine
	I0317 11:02:32.172692  271403 client.go:171] duration metric: took 7.119491835s to LocalClient.Create
	I0317 11:02:32.172711  271403 start.go:167] duration metric: took 7.119541902s to libmachine.API.Create "calico-236437"
	I0317 11:02:32.172723  271403 start.go:293] postStartSetup for "calico-236437" (driver="docker")
	I0317 11:02:32.172734  271403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:02:32.172782  271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:02:32.172832  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:32.189861  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:32.284036  271403 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:02:32.287202  271403 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 11:02:32.287240  271403 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 11:02:32.287285  271403 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 11:02:32.287295  271403 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 11:02:32.287311  271403 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/addons for local assets ...
	I0317 11:02:32.287361  271403 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/files for local assets ...
	I0317 11:02:32.287433  271403 filesync.go:149] local asset: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem -> 116902.pem in /etc/ssl/certs
	I0317 11:02:32.287518  271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:02:32.295619  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:02:32.317674  271403 start.go:296] duration metric: took 144.936846ms for postStartSetup
	I0317 11:02:32.318040  271403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-236437
	I0317 11:02:32.335236  271403 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/config.json ...
	I0317 11:02:32.335512  271403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 11:02:32.335547  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:32.351723  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:32.444147  271403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 11:02:32.448601  271403 start.go:128] duration metric: took 7.397705312s to createHost
	I0317 11:02:32.448627  271403 start.go:83] releasing machines lock for "calico-236437", held for 7.39785815s
	I0317 11:02:32.448708  271403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-236437
	I0317 11:02:32.467676  271403 ssh_runner.go:195] Run: cat /version.json
	I0317 11:02:32.467727  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:32.467758  271403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 11:02:32.467811  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:32.485718  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:32.485824  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:32.657328  271403 ssh_runner.go:195] Run: systemctl --version
	I0317 11:02:32.661411  271403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:02:32.665794  271403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 11:02:32.689140  271403 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 11:02:32.689229  271403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:02:32.714533  271403 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
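	
	(The find/sed pass above normalizes whatever loopback config the base image ships: it injects a "name" key when missing and pins "cniVersion" to 1.0.0, while the second pass renames any bridge/podman configs out of the way. Applied to a typical loopback file, the patched result would read roughly:)
	
	    {
	      "cniVersion": "1.0.0",
	      "name": "loopback",
	      "type": "loopback"
	    }
	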
	I0317 11:02:32.714561  271403 start.go:495] detecting cgroup driver to use...
	I0317 11:02:32.714602  271403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 11:02:32.714651  271403 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:02:32.726430  271403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:02:32.736704  271403 docker.go:217] disabling cri-docker service (if available) ...
	I0317 11:02:32.736750  271403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 11:02:32.749237  271403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 11:02:32.762021  271403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 11:02:32.837408  271403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 11:02:32.915411  271403 docker.go:233] disabling docker service ...
	I0317 11:02:32.915475  271403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 11:02:32.934753  271403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 11:02:32.945339  271403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 11:02:33.026602  271403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 11:02:33.105023  271403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 11:02:33.115410  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:02:33.130129  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:02:33.139140  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:02:33.148241  271403 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:02:33.148304  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:02:33.156976  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:02:33.165716  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:02:33.174440  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:02:33.183153  271403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:02:33.191608  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:02:33.200222  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:02:33.208828  271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 11:02:33.217773  271403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:02:33.225411  271403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:02:33.233211  271403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:02:33.313024  271403 ssh_runner.go:195] Run: sudo systemctl restart containerd
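	
	(Taken together, the sed edits above aim the CRI section of /etc/containerd/config.toml at roughly the following shape before the daemon-reload and restart. Key placement follows containerd 1.7 conventions; the real file carries many more settings.)
	
	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.10"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false
	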
	I0317 11:02:33.412133  271403 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 11:02:33.412208  271403 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 11:02:33.415675  271403 start.go:563] Will wait 60s for crictl version
	I0317 11:02:33.415723  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:02:33.418802  271403 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:02:33.454942  271403 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 11:02:33.455012  271403 ssh_runner.go:195] Run: containerd --version
	I0317 11:02:33.477807  271403 ssh_runner.go:195] Run: containerd --version
	I0317 11:02:33.501834  271403 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 11:02:33.502865  271403 cli_runner.go:164] Run: docker network inspect calico-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:02:33.521053  271403 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0317 11:02:33.524629  271403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:02:33.535881  271403 kubeadm.go:883] updating cluster {Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:02:33.536009  271403 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:02:33.536072  271403 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:02:33.567514  271403 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:02:33.567533  271403 containerd.go:534] Images already preloaded, skipping extraction
	I0317 11:02:33.567587  271403 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:02:33.598171  271403 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:02:33.598192  271403 cache_images.go:84] Images are preloaded, skipping loading
	I0317 11:02:33.598199  271403 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
	I0317 11:02:33.598293  271403 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-236437 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0317 11:02:33.598353  271403 ssh_runner.go:195] Run: sudo crictl info
	I0317 11:02:33.630316  271403 cni.go:84] Creating CNI manager for "calico"
	I0317 11:02:33.630339  271403 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:02:33.630359  271403 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-236437 NodeName:calico-236437 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 11:02:33.630477  271403 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-236437"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 11:02:33.630528  271403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:02:33.638862  271403 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 11:02:33.638928  271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 11:02:33.647870  271403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0317 11:02:33.664419  271403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:02:33.680721  271403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
	I0317 11:02:33.697486  271403 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0317 11:02:33.700806  271403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
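	
	(The grep-then-cp pattern here, rather than sed -i or a rename, is deliberate: inside a container /etc/hosts is a bind mount, so the file can be overwritten in place but not replaced, and the unprivileged shell assembles the new copy under /tmp before sudo cp writes it back.)
	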
	I0317 11:02:33.710885  271403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:02:33.789041  271403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:02:33.801846  271403 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437 for IP: 192.168.85.2
	I0317 11:02:33.801877  271403 certs.go:194] generating shared ca certs ...
	I0317 11:02:33.801896  271403 certs.go:226] acquiring lock for ca certs: {Name:mkf58624c63680e02907d28348d45986283847c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:33.802058  271403 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key
	I0317 11:02:33.802123  271403 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key
	I0317 11:02:33.802137  271403 certs.go:256] generating profile certs ...
	I0317 11:02:33.802202  271403 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.key
	I0317 11:02:33.802228  271403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.crt with IP's: []
	I0317 11:02:33.992607  271403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.crt ...
	I0317 11:02:33.992636  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.crt: {Name:mkb52ca2b7d5614e9a99d0baa0ecbebaddb0cc98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:33.992801  271403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.key ...
	I0317 11:02:33.992819  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.key: {Name:mk35db6f772b5eb0d0f9eef0f32d9e01b2c6129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:33.992895  271403 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4
	I0317 11:02:33.992909  271403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0317 11:02:34.206081  271403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4 ...
	I0317 11:02:34.206116  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4: {Name:mk106a12a3266907a0c64fdec49d2d65cff8ef4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:34.206307  271403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4 ...
	I0317 11:02:34.206328  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4: {Name:mkb761c01ac7dd169e99815f4912e839650faba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:34.206446  271403 certs.go:381] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt
	I0317 11:02:34.206543  271403 certs.go:385] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key
	I0317 11:02:34.206635  271403 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key
	I0317 11:02:34.206657  271403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt with IP's: []
	I0317 11:02:34.324068  271403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt ...
	I0317 11:02:34.324097  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt: {Name:mk823c22b3bc8a80bc3c82b282af79b6abc16d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:34.324254  271403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key ...
	I0317 11:02:34.324267  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key: {Name:mk875be3f1f3630e7e6086d3ef46f0bec9649fb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:34.324420  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem (1338 bytes)
	W0317 11:02:34.324451  271403 certs.go:480] ignoring /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690_empty.pem, impossibly tiny 0 bytes
	I0317 11:02:34.324461  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 11:02:34.324494  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem (1082 bytes)
	I0317 11:02:34.324524  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem (1123 bytes)
	I0317 11:02:34.324558  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem (1679 bytes)
	I0317 11:02:34.324619  271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:02:34.325244  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:02:34.348013  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:02:34.369328  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:02:34.391242  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 11:02:34.413233  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 11:02:34.434100  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 11:02:34.458186  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:02:34.481676  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 11:02:34.505221  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:02:34.527325  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem --> /usr/share/ca-certificates/11690.pem (1338 bytes)
	I0317 11:02:34.551519  271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /usr/share/ca-certificates/116902.pem (1708 bytes)
	I0317 11:02:34.572901  271403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:02:34.588811  271403 ssh_runner.go:195] Run: openssl version
	I0317 11:02:34.593841  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11690.pem && ln -fs /usr/share/ca-certificates/11690.pem /etc/ssl/certs/11690.pem"
	I0317 11:02:34.602126  271403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11690.pem
	I0317 11:02:34.605246  271403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:32 /usr/share/ca-certificates/11690.pem
	I0317 11:02:34.605299  271403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11690.pem
	I0317 11:02:34.611760  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11690.pem /etc/ssl/certs/51391683.0"
	I0317 11:02:34.619902  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116902.pem && ln -fs /usr/share/ca-certificates/116902.pem /etc/ssl/certs/116902.pem"
	I0317 11:02:34.627931  271403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116902.pem
	I0317 11:02:34.631011  271403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:32 /usr/share/ca-certificates/116902.pem
	I0317 11:02:34.631053  271403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116902.pem
	I0317 11:02:34.637206  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116902.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:02:34.646079  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:02:34.654752  271403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:02:34.657906  271403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:02:34.657954  271403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:02:34.664388  271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
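	
	(The 8-hex-digit link names above — 51391683.0, 3ec20f2e.0, b5213941.0 — are OpenSSL subject-name hashes, as printed by openssl x509 -hash -noout. TLS libraries resolve a trusted CA by hashing its subject and looking for <hash>.0 in /etc/ssl/certs, so each symlink makes the corresponding PEM discoverable without rebuilding the trust store.)
	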
	I0317 11:02:34.673111  271403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:02:34.676159  271403 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:02:34.676200  271403 kubeadm.go:392] StartCluster: {Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:02:34.676252  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 11:02:34.676286  271403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 11:02:34.710371  271403 cri.go:89] found id: ""
	I0317 11:02:34.710443  271403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:02:34.720254  271403 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:02:34.728439  271403 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 11:02:34.728511  271403 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:02:34.736684  271403 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:02:34.736699  271403 kubeadm.go:157] found existing configuration files:
	
	I0317 11:02:34.736730  271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 11:02:34.744549  271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:02:34.744604  271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:02:34.752129  271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 11:02:34.760012  271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:02:34.760069  271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:02:34.767476  271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 11:02:34.775057  271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:02:34.775105  271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:02:34.782810  271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 11:02:34.790578  271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:02:34.790624  271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 11:02:34.797888  271403 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 11:02:34.833333  271403 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 11:02:34.833405  271403 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:02:34.849583  271403 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 11:02:34.849687  271403 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 11:02:34.849745  271403 kubeadm.go:310] OS: Linux
	I0317 11:02:34.849817  271403 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 11:02:34.849899  271403 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 11:02:34.849997  271403 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 11:02:34.850078  271403 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 11:02:34.850154  271403 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 11:02:34.850217  271403 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 11:02:34.850265  271403 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 11:02:34.850312  271403 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 11:02:34.850353  271403 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 11:02:34.904813  271403 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:02:34.904974  271403 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:02:34.905103  271403 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:02:34.909905  271403 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:02:34.911531  271403 out.go:235]   - Generating certificates and keys ...
	I0317 11:02:34.911635  271403 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:02:34.911736  271403 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:02:35.268722  271403 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:02:35.468484  271403 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:02:35.769348  271403 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:02:35.993040  271403 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:02:36.202807  271403 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:02:36.203004  271403 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-236437 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0317 11:02:36.280951  271403 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:02:36.281084  271403 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-236437 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0317 11:02:36.463620  271403 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:02:36.510242  271403 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:02:36.900000  271403 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:02:36.900111  271403 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:02:37.075436  271403 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:02:37.263196  271403 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 11:02:37.642492  271403 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:02:37.737086  271403 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:02:38.040875  271403 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:02:38.041549  271403 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:02:38.043872  271403 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:02:38.045834  271403 out.go:235]   - Booting up control plane ...
	I0317 11:02:38.045950  271403 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:02:38.046019  271403 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:02:38.046719  271403 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:02:38.056299  271403 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:02:38.061457  271403 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:02:38.061534  271403 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:02:38.143998  271403 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 11:02:38.144138  271403 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 11:02:38.645417  271403 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.431671ms
	I0317 11:02:38.645515  271403 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 11:02:43.147383  271403 kubeadm.go:310] [api-check] The API server is healthy after 4.501934621s
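	
	(Both the kubelet-check and api-check phases above are simple deadline polls against a local healthz endpoint. A hedged Go sketch of the pattern, with the endpoint and ceiling taken from the log; kubeadm's real implementation differs in detail.)
	
	    package main
	
	    import (
	        "fmt"
	        "net/http"
	        "time"
	    )
	
	    // waitHealthy GETs the endpoint until it returns 200 OK or the
	    // deadline passes, mirroring the "can take up to 4m0s" checks above.
	    func waitHealthy(url string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := http.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("%s not healthy after %s", url, timeout)
	    }
	
	    func main() {
	        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
	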
	I0317 11:02:43.158723  271403 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:02:43.168464  271403 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:02:43.184339  271403 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:02:43.184609  271403 kubeadm.go:310] [mark-control-plane] Marking the node calico-236437 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 11:02:43.191081  271403 kubeadm.go:310] [bootstrap-token] Using token: mixhu0.4ggx0rlksl4xdr10
	I0317 11:02:43.192582  271403 out.go:235]   - Configuring RBAC rules ...
	I0317 11:02:43.192739  271403 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:02:43.196215  271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:02:43.200588  271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:02:43.202942  271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:02:43.205272  271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:02:43.207452  271403 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:02:43.553368  271403 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:02:43.969959  271403 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:02:44.553346  271403 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:02:44.554242  271403 kubeadm.go:310] 
	I0317 11:02:44.554342  271403 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:02:44.554359  271403 kubeadm.go:310] 
	I0317 11:02:44.554471  271403 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:02:44.554492  271403 kubeadm.go:310] 
	I0317 11:02:44.554522  271403 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:02:44.554611  271403 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:02:44.554704  271403 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:02:44.554722  271403 kubeadm.go:310] 
	I0317 11:02:44.554806  271403 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:02:44.554816  271403 kubeadm.go:310] 
	I0317 11:02:44.554894  271403 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:02:44.554903  271403 kubeadm.go:310] 
	I0317 11:02:44.554993  271403 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:02:44.555106  271403 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:02:44.555207  271403 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:02:44.555217  271403 kubeadm.go:310] 
	I0317 11:02:44.555395  271403 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:02:44.555506  271403 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:02:44.555523  271403 kubeadm.go:310] 
	I0317 11:02:44.555637  271403 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mixhu0.4ggx0rlksl4xdr10 \
	I0317 11:02:44.555775  271403 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 \
	I0317 11:02:44.555807  271403 kubeadm.go:310] 	--control-plane 
	I0317 11:02:44.555816  271403 kubeadm.go:310] 
	I0317 11:02:44.555924  271403 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:02:44.555932  271403 kubeadm.go:310] 
	I0317 11:02:44.556026  271403 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mixhu0.4ggx0rlksl4xdr10 \
	I0317 11:02:44.556149  271403 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 
	I0317 11:02:44.558534  271403 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 11:02:44.558760  271403 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 11:02:44.558854  271403 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 11:02:44.558879  271403 cni.go:84] Creating CNI manager for "calico"
	I0317 11:02:44.561122  271403 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0317 11:02:44.562673  271403 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 11:02:44.562695  271403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (324369 bytes)
	I0317 11:02:44.581949  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 11:02:45.843315  271403 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.261329329s)
	I0317 11:02:45.843361  271403 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:02:45.843456  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:02:45.843478  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-236437 minikube.k8s.io/updated_at=2025_03_17T11_02_45_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=calico-236437 minikube.k8s.io/primary=true
	I0317 11:02:45.850707  271403 ops.go:34] apiserver oom_adj: -16
	I0317 11:02:45.948147 ... I0317 11:02:48.949124  271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig (identical retry logged 7 times at ~0.5s intervals; duplicates elided)
	I0317 11:02:49.015125  271403 kubeadm.go:1113] duration metric: took 3.171736497s to wait for elevateKubeSystemPrivileges
	I0317 11:02:49.015169  271403 kubeadm.go:394] duration metric: took 14.338970216s to StartCluster
	I0317 11:02:49.015191  271403 settings.go:142] acquiring lock: {Name:mk2a57d556efff40ccd4336229d7a78216b861f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:49.015295  271403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:02:49.016764  271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/kubeconfig: {Name:mk686b9f6159ab958672b945ae0aa5a9c96e9ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:02:49.017020  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:02:49.017025  271403 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:02:49.017094  271403 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:02:49.017190  271403 addons.go:69] Setting storage-provisioner=true in profile "calico-236437"
	I0317 11:02:49.017214  271403 addons.go:238] Setting addon storage-provisioner=true in "calico-236437"
	I0317 11:02:49.017235  271403 addons.go:69] Setting default-storageclass=true in profile "calico-236437"
	I0317 11:02:49.017249  271403 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:02:49.017263  271403 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-236437"
	I0317 11:02:49.017336  271403 host.go:66] Checking if "calico-236437" exists ...
	I0317 11:02:49.017645  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:49.017831  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:49.018669  271403 out.go:177] * Verifying Kubernetes components...
	I0317 11:02:49.019970  271403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:02:49.043863  271403 addons.go:238] Setting addon default-storageclass=true in "calico-236437"
	I0317 11:02:49.043916  271403 host.go:66] Checking if "calico-236437" exists ...
	I0317 11:02:49.044307  271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
	I0317 11:02:49.044516  271403 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:02:49.045642  271403 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:02:49.045662  271403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:02:49.045707  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:49.074641  271403 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:02:49.074679  271403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:02:49.074683  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:49.074750  271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
	I0317 11:02:49.092825  271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
	I0317 11:02:49.146609  271403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:02:49.146645  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 11:02:49.231557  271403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:02:49.512613  271403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:02:49.840101  271403 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0317 11:02:49.841256  271403 node_ready.go:35] waiting up to 15m0s for node "calico-236437" to be "Ready" ...
	I0317 11:02:49.904604  271403 node_ready.go:49] node "calico-236437" has status "Ready":"True"
	I0317 11:02:49.904627  271403 node_ready.go:38] duration metric: took 63.34338ms for node "calico-236437" to be "Ready" ...
	I0317 11:02:49.904637  271403 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:02:49.907969  271403 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace to be "Ready" ...
	I0317 11:02:50.110000  271403 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:02:50.111234  271403 addons.go:514] duration metric: took 1.094138366s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 11:02:50.344618  271403 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-236437" context rescaled to 1 replicas
	I0317 11:02:51.912894 ... I0317 11:06:48.412720  271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False" (identical poll result logged 104 times at ~2.5s intervals; duplicates elided)
	I0317 11:06:49.912504  271403 pod_ready.go:82] duration metric: took 4m0.004506039s for pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace to be "Ready" ...
	E0317 11:06:49.912527  271403 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:06:49.912535  271403 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-ks7vr" in "kube-system" namespace to be "Ready" ...
	I0317 11:06:51.918374 ... I0317 11:10:48.417448  271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False" (identical poll result logged 104 times at ~2.5s intervals; duplicates elided)
	I0317 11:10:49.918229  271403 pod_ready.go:82] duration metric: took 4m0.005680866s for pod "calico-node-ks7vr" in "kube-system" namespace to be "Ready" ...
	E0317 11:10:49.918252  271403 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:10:49.918259  271403 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-p5ksh" in "kube-system" namespace to be "Ready" ...
	I0317 11:10:49.920278  271403 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-p5ksh" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-p5ksh" not found
	I0317 11:10:49.920297  271403 pod_ready.go:82] duration metric: took 2.032458ms for pod "coredns-668d6bf9bc-p5ksh" in "kube-system" namespace to be "Ready" ...
	E0317 11:10:49.920307  271403 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-p5ksh" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-p5ksh" not found
	I0317 11:10:49.920315  271403 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-zd9kj" in "kube-system" namespace to be "Ready" ...
	I0317 11:10:51.925103 ... I0317 11:14:48.425328  271403 pod_ready.go:103] pod "coredns-668d6bf9bc-zd9kj" in "kube-system" namespace has status "Ready":"False" (identical poll result logged 102 times at ~2.5s intervals; duplicates elided)
	I0317 11:14:49.926222  271403 pod_ready.go:82] duration metric: took 4m0.005893745s for pod "coredns-668d6bf9bc-zd9kj" in "kube-system" namespace to be "Ready" ...
	E0317 11:14:49.926245  271403 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:14:49.926254  271403 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:49.930482  271403 pod_ready.go:93] pod "etcd-calico-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:14:49.930499  271403 pod_ready.go:82] duration metric: took 4.237789ms for pod "etcd-calico-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:49.930507  271403 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:49.933937  271403 pod_ready.go:93] pod "kube-apiserver-calico-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:14:49.933956  271403 pod_ready.go:82] duration metric: took 3.442261ms for pod "kube-apiserver-calico-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:49.933968  271403 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:49.937375  271403 pod_ready.go:93] pod "kube-controller-manager-calico-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:14:49.937394  271403 pod_ready.go:82] duration metric: took 3.417944ms for pod "kube-controller-manager-calico-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:49.937405  271403 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-ntqtp" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:49.941132  271403 pod_ready.go:93] pod "kube-proxy-ntqtp" in "kube-system" namespace has status "Ready":"True"
	I0317 11:14:49.941149  271403 pod_ready.go:82] duration metric: took 3.737656ms for pod "kube-proxy-ntqtp" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:49.941156  271403 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:50.324133  271403 pod_ready.go:93] pod "kube-scheduler-calico-236437" in "kube-system" namespace has status "Ready":"True"
	I0317 11:14:50.324158  271403 pod_ready.go:82] duration metric: took 382.994064ms for pod "kube-scheduler-calico-236437" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:50.324168  271403 pod_ready.go:39] duration metric: took 12m0.419518216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:14:50.324192  271403 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:14:50.324229  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:14:50.324292  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:14:50.361815  271403 cri.go:89] found id: "4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5"
	I0317 11:14:50.361837  271403 cri.go:89] found id: ""
	I0317 11:14:50.361845  271403 logs.go:282] 1 containers: [4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5]
	I0317 11:14:50.361897  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:50.365688  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:14:50.365756  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:14:50.406302  271403 cri.go:89] found id: "7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b"
	I0317 11:14:50.406327  271403 cri.go:89] found id: ""
	I0317 11:14:50.406346  271403 logs.go:282] 1 containers: [7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b]
	I0317 11:14:50.406399  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:50.409825  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:14:50.409888  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:14:50.450347  271403 cri.go:89] found id: ""
	I0317 11:14:50.450378  271403 logs.go:282] 0 containers: []
	W0317 11:14:50.450390  271403 logs.go:284] No container was found matching "coredns"
	I0317 11:14:50.450405  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:14:50.450467  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:14:50.489073  271403 cri.go:89] found id: "7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954"
	I0317 11:14:50.489100  271403 cri.go:89] found id: ""
	I0317 11:14:50.489109  271403 logs.go:282] 1 containers: [7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954]
	I0317 11:14:50.489167  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:50.493165  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:14:50.493230  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:14:50.535098  271403 cri.go:89] found id: "87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa"
	I0317 11:14:50.535117  271403 cri.go:89] found id: ""
	I0317 11:14:50.535123  271403 logs.go:282] 1 containers: [87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa]
	I0317 11:14:50.535170  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:50.539473  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:14:50.539539  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:14:50.580732  271403 cri.go:89] found id: "1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e"
	I0317 11:14:50.580759  271403 cri.go:89] found id: ""
	I0317 11:14:50.580768  271403 logs.go:282] 1 containers: [1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e]
	I0317 11:14:50.580846  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:50.584765  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:14:50.584823  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:14:50.623413  271403 cri.go:89] found id: ""
	I0317 11:14:50.623434  271403 logs.go:282] 0 containers: []
	W0317 11:14:50.623442  271403 logs.go:284] No container was found matching "kindnet"
	I0317 11:14:50.623460  271403 logs.go:123] Gathering logs for kube-controller-manager [1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e] ...
	I0317 11:14:50.623472  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e"
	I0317 11:14:50.674530  271403 logs.go:123] Gathering logs for kubelet ...
	I0317 11:14:50.674555  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:14:50.846175  271403 logs.go:123] Gathering logs for dmesg ...
	I0317 11:14:50.846206  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:14:50.865813  271403 logs.go:123] Gathering logs for kube-apiserver [4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5] ...
	I0317 11:14:50.865844  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5"
	I0317 11:14:50.906505  271403 logs.go:123] Gathering logs for etcd [7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b] ...
	I0317 11:14:50.906534  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b"
	I0317 11:14:50.948788  271403 logs.go:123] Gathering logs for kube-proxy [87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa] ...
	I0317 11:14:50.948816  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa"
	I0317 11:14:50.981793  271403 logs.go:123] Gathering logs for containerd ...
	I0317 11:14:50.981821  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:14:51.051173  271403 logs.go:123] Gathering logs for container status ...
	I0317 11:14:51.051215  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:14:51.087035  271403 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:14:51.087064  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:14:51.170343  271403 logs.go:123] Gathering logs for kube-scheduler [7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954] ...
	I0317 11:14:51.170372  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954"
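[Editor's note] The log-gathering pass above runs "crictl ps -a --quiet --name=<component>" per control-plane component and then tails each found container's logs. A small Go sketch reproducing those exact shell commands locally (the crictl invocations are copied from the log; in the test they are executed over SSH inside the minikube node, which this sketch does not do):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Discover container IDs per component, then tail each one's logs,
// matching the crictl commands shown in the log above.
func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("sudo", "crictl", "logs",
				"--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}

Note how this matches the failure signature above: the coredns and kindnet queries return no IDs, which is why no CoreDNS container logs are available to gather.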
	I0317 11:14:53.727408  271403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:14:53.741296  271403 api_server.go:72] duration metric: took 12m4.724234897s to wait for apiserver process to appear ...
	I0317 11:14:53.741328  271403 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:14:53.741361  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:14:53.741425  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:14:53.777109  271403 cri.go:89] found id: "4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5"
	I0317 11:14:53.777138  271403 cri.go:89] found id: ""
	I0317 11:14:53.777148  271403 logs.go:282] 1 containers: [4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5]
	I0317 11:14:53.777205  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:53.781089  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:14:53.781147  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:14:53.813282  271403 cri.go:89] found id: "7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b"
	I0317 11:14:53.813308  271403 cri.go:89] found id: ""
	I0317 11:14:53.813318  271403 logs.go:282] 1 containers: [7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b]
	I0317 11:14:53.813365  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:53.816873  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:14:53.816944  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:14:53.851523  271403 cri.go:89] found id: ""
	I0317 11:14:53.851551  271403 logs.go:282] 0 containers: []
	W0317 11:14:53.851562  271403 logs.go:284] No container was found matching "coredns"
	I0317 11:14:53.851570  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:14:53.851628  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:14:53.885389  271403 cri.go:89] found id: "7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954"
	I0317 11:14:53.885418  271403 cri.go:89] found id: ""
	I0317 11:14:53.885428  271403 logs.go:282] 1 containers: [7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954]
	I0317 11:14:53.885483  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:53.889162  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:14:53.889224  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:14:53.924005  271403 cri.go:89] found id: "87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa"
	I0317 11:14:53.924030  271403 cri.go:89] found id: ""
	I0317 11:14:53.924038  271403 logs.go:282] 1 containers: [87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa]
	I0317 11:14:53.924095  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:53.927904  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:14:53.927972  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:14:53.962222  271403 cri.go:89] found id: "1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e"
	I0317 11:14:53.962248  271403 cri.go:89] found id: ""
	I0317 11:14:53.962258  271403 logs.go:282] 1 containers: [1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e]
	I0317 11:14:53.962310  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:53.966086  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:14:53.966150  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:14:53.998495  271403 cri.go:89] found id: ""
	I0317 11:14:53.998522  271403 logs.go:282] 0 containers: []
	W0317 11:14:53.998534  271403 logs.go:284] No container was found matching "kindnet"
	I0317 11:14:53.998550  271403 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:14:53.998564  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:14:54.088320  271403 logs.go:123] Gathering logs for kube-apiserver [4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5] ...
	I0317 11:14:54.088351  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5"
	I0317 11:14:54.131858  271403 logs.go:123] Gathering logs for kube-scheduler [7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954] ...
	I0317 11:14:54.131891  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954"
	I0317 11:14:54.174562  271403 logs.go:123] Gathering logs for kube-proxy [87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa] ...
	I0317 11:14:54.174605  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa"
	I0317 11:14:54.209981  271403 logs.go:123] Gathering logs for kube-controller-manager [1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e] ...
	I0317 11:14:54.210018  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e"
	I0317 11:14:54.258811  271403 logs.go:123] Gathering logs for kubelet ...
	I0317 11:14:54.258845  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:14:54.403463  271403 logs.go:123] Gathering logs for dmesg ...
	I0317 11:14:54.403499  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:14:54.424255  271403 logs.go:123] Gathering logs for etcd [7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b] ...
	I0317 11:14:54.424286  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b"
	I0317 11:14:54.466211  271403 logs.go:123] Gathering logs for containerd ...
	I0317 11:14:54.466245  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:14:54.549993  271403 logs.go:123] Gathering logs for container status ...
	I0317 11:14:54.550029  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:14:57.097786  271403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0317 11:14:57.101448  271403 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0317 11:14:57.102433  271403 api_server.go:141] control plane version: v1.32.2
	I0317 11:14:57.102460  271403 api_server.go:131] duration metric: took 3.361123531s to wait for apiserver health ...
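[Editor's note] The healthz check above is a plain HTTPS GET against the apiserver. A minimal sketch of that probe, with host and port taken from the log; skipping TLS verification is an illustration-only shortcut, since minikube itself trusts the cluster CA from the kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Probes the apiserver health endpoint the way the log above reports
// it ("returned 200: ok").
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}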
	I0317 11:14:57.102469  271403 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:14:57.102495  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:14:57.102548  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:14:57.156854  271403 cri.go:89] found id: "4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5"
	I0317 11:14:57.156877  271403 cri.go:89] found id: ""
	I0317 11:14:57.156884  271403 logs.go:282] 1 containers: [4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5]
	I0317 11:14:57.156927  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:57.160597  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:14:57.160668  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:14:57.194320  271403 cri.go:89] found id: "7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b"
	I0317 11:14:57.194345  271403 cri.go:89] found id: ""
	I0317 11:14:57.194354  271403 logs.go:282] 1 containers: [7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b]
	I0317 11:14:57.194415  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:57.198330  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:14:57.198430  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:14:57.239357  271403 cri.go:89] found id: ""
	I0317 11:14:57.239393  271403 logs.go:282] 0 containers: []
	W0317 11:14:57.239405  271403 logs.go:284] No container was found matching "coredns"
	I0317 11:14:57.239415  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:14:57.239477  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:14:57.282061  271403 cri.go:89] found id: "7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954"
	I0317 11:14:57.282095  271403 cri.go:89] found id: ""
	I0317 11:14:57.282105  271403 logs.go:282] 1 containers: [7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954]
	I0317 11:14:57.282161  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:57.286572  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:14:57.286626  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:14:57.331623  271403 cri.go:89] found id: "87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa"
	I0317 11:14:57.331653  271403 cri.go:89] found id: ""
	I0317 11:14:57.331662  271403 logs.go:282] 1 containers: [87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa]
	I0317 11:14:57.331716  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:57.336845  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:14:57.336924  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:14:57.373400  271403 cri.go:89] found id: "1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e"
	I0317 11:14:57.373422  271403 cri.go:89] found id: ""
	I0317 11:14:57.373429  271403 logs.go:282] 1 containers: [1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e]
	I0317 11:14:57.373473  271403 ssh_runner.go:195] Run: which crictl
	I0317 11:14:57.377779  271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:14:57.377867  271403 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:14:57.416248  271403 cri.go:89] found id: ""
	I0317 11:14:57.416274  271403 logs.go:282] 0 containers: []
	W0317 11:14:57.416286  271403 logs.go:284] No container was found matching "kindnet"
	I0317 11:14:57.416304  271403 logs.go:123] Gathering logs for containerd ...
	I0317 11:14:57.416318  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:14:57.495038  271403 logs.go:123] Gathering logs for container status ...
	I0317 11:14:57.495080  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:14:57.550774  271403 logs.go:123] Gathering logs for kubelet ...
	I0317 11:14:57.550807  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:14:57.704339  271403 logs.go:123] Gathering logs for dmesg ...
	I0317 11:14:57.704379  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:14:57.732959  271403 logs.go:123] Gathering logs for kube-apiserver [4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5] ...
	I0317 11:14:57.733003  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e7cd8e8a2904f90b91e00d398eca0d9f526c2698ec55cfa423b32bd27e06da5"
	I0317 11:14:57.780012  271403 logs.go:123] Gathering logs for etcd [7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b] ...
	I0317 11:14:57.780102  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7eb7e930f0742b9ece56cb7dfaa08969790b4781927f7bcd41ee029fe212376b"
	I0317 11:14:57.829187  271403 logs.go:123] Gathering logs for kube-proxy [87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa] ...
	I0317 11:14:57.829230  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87614e0ec48e3bc42652910569b32d09ba5585f5eb7121d01a036b7063528afa"
	I0317 11:14:57.865700  271403 logs.go:123] Gathering logs for kube-controller-manager [1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e] ...
	I0317 11:14:57.865735  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b11dfdd302a120d4d828ffa77348e9442b9649f53a58dd3281b53ef2d044e"
	I0317 11:14:57.914784  271403 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:14:57.914826  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:14:58.010613  271403 logs.go:123] Gathering logs for kube-scheduler [7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954] ...
	I0317 11:14:58.010648  271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7382088026afb348904d4318fdc247569b8c69b190b5f937cfc47f75ef9b3954"
	I0317 11:15:00.564963  271403 system_pods.go:59] 9 kube-system pods found
	I0317 11:15:00.565011  271403 system_pods.go:61] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:00.565025  271403 system_pods.go:61] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:00.565038  271403 system_pods.go:61] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:00.565044  271403 system_pods.go:61] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:00.565051  271403 system_pods.go:61] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:00.565057  271403 system_pods.go:61] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:00.565062  271403 system_pods.go:61] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:00.565067  271403 system_pods.go:61] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:00.565071  271403 system_pods.go:61] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:00.565079  271403 system_pods.go:74] duration metric: took 3.462601221s to wait for pod list to return data ...
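[Editor's note] The system_pods summaries above come from listing everything in kube-system and reporting each pod's phase plus any unready containers. A rough client-go equivalent (kubeconfig path from the log; the output format is only an approximation of system_pods.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Lists kube-system pods and flags the ones that are not yet Ready,
// approximating the "N kube-system pods found" summaries above.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%q %s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}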
	I0317 11:15:00.565095  271403 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:15:00.567975  271403 default_sa.go:45] found service account: "default"
	I0317 11:15:00.568001  271403 default_sa.go:55] duration metric: took 2.898564ms for default service account to be created ...
	I0317 11:15:00.568011  271403 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:15:00.570838  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:00.570880  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:00.570889  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:00.570906  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:00.570915  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:00.570924  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:00.570929  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:00.570937  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:00.570943  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:00.570949  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:00.570975  271403 retry.go:31] will retry after 221.469968ms: missing components: kube-dns
	I0317 11:15:00.796972  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:00.797014  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:00.797028  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:00.797038  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:00.797047  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:00.797053  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:00.797057  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:00.797060  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:00.797065  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:00.797069  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:00.797085  271403 retry.go:31] will retry after 341.934929ms: missing components: kube-dns
	I0317 11:15:01.142942  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:01.142978  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:01.142987  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:01.142994  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:01.142998  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:01.143003  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:01.143006  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:01.143011  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:01.143014  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:01.143018  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:01.143032  271403 retry.go:31] will retry after 463.929811ms: missing components: kube-dns
	I0317 11:15:01.610860  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:01.610894  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:01.610904  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:01.610913  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:01.610920  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:01.610924  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:01.610928  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:01.610932  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:01.610936  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:01.610942  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:01.610959  271403 retry.go:31] will retry after 501.469424ms: missing components: kube-dns
	I0317 11:15:02.116668  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:02.116717  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:02.116729  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:02.116760  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:02.116768  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:02.116783  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:02.116789  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:02.116807  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:02.116813  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:02.116824  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:02.116842  271403 retry.go:31] will retry after 675.355148ms: missing components: kube-dns
	I0317 11:15:02.796176  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:02.796210  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:02.796221  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:02.796228  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:02.796232  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:02.796237  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:02.796240  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:02.796244  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:02.796247  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:02.796250  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:02.796263  271403 retry.go:31] will retry after 854.928136ms: missing components: kube-dns
	I0317 11:15:03.654725  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:03.654759  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:03.654767  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:03.654776  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:03.654781  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:03.654786  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:03.654789  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:03.654793  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:03.654796  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:03.654799  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:03.654812  271403 retry.go:31] will retry after 1.068445256s: missing components: kube-dns
	I0317 11:15:04.727570  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:04.727601  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:04.727610  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:04.727617  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:04.727622  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:04.727626  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:04.727630  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:04.727633  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:04.727637  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:04.727640  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:04.727655  271403 retry.go:31] will retry after 1.011170927s: missing components: kube-dns
	I0317 11:15:05.742406  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:05.742437  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:05.742449  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:05.742456  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:05.742462  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:05.742469  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:05.742474  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:05.742480  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:05.742488  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:05.742503  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:05.742521  271403 retry.go:31] will retry after 1.727074766s: missing components: kube-dns
	I0317 11:15:07.474851  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:07.474885  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:07.474893  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:07.474901  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:07.474907  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:07.474912  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:07.474916  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:07.474920  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:07.474924  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:07.474928  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:07.474944  271403 retry.go:31] will retry after 2.034967039s: missing components: kube-dns
	I0317 11:15:09.513908  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:09.513944  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:09.513953  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:09.513962  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:09.513967  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:09.513973  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:09.513977  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:09.513980  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:09.513983  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:09.513986  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:09.514001  271403 retry.go:31] will retry after 1.769448894s: missing components: kube-dns
	I0317 11:15:11.287014  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:11.287047  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:11.287055  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:11.287063  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:11.287067  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:11.287071  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:11.287075  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:11.287078  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:11.287081  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:11.287085  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:11.287109  271403 retry.go:31] will retry after 3.203013443s: missing components: kube-dns
	I0317 11:15:14.494652  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:14.494694  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:14.494706  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:14.494716  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:14.494722  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:14.494729  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:14.494734  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:14.494740  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:14.494745  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:14.494751  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:14.494770  271403 retry.go:31] will retry after 3.673243782s: missing components: kube-dns
	I0317 11:15:18.174800  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:18.174837  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:18.174847  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:18.174854  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:18.174861  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:18.174867  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:18.174870  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:18.174876  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:18.174884  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:18.174888  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:18.174904  271403 retry.go:31] will retry after 3.415392885s: missing components: kube-dns
	I0317 11:15:21.594482  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:21.594514  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:21.594522  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:21.594529  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:21.594535  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:21.594540  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:21.594544  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:21.594547  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:21.594550  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:21.594553  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:21.594568  271403 retry.go:31] will retry after 6.399800704s: missing components: kube-dns
	I0317 11:15:27.998736  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:27.998768  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:27.998781  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:27.998788  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:27.998792  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:27.998797  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:27.998800  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:27.998804  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:27.998807  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:27.998810  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:27.998826  271403 retry.go:31] will retry after 7.359129054s: missing components: kube-dns
	I0317 11:15:35.362821  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:35.362943  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:35.362962  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:35.362973  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:35.362988  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:35.362997  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:35.363002  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:35.363014  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:35.363025  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:35.363030  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:35.363053  271403 retry.go:31] will retry after 11.007398685s: missing components: kube-dns
	I0317 11:15:46.374316  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:46.374347  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:46.374356  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:46.374363  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:46.374367  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:46.374373  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:46.374377  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:46.374380  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:46.374384  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:46.374389  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:46.374402  271403 retry.go:31] will retry after 13.180062227s: missing components: kube-dns
	I0317 11:15:59.558188  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:15:59.558221  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:15:59.558232  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:15:59.558239  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:15:59.558243  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:15:59.558247  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:15:59.558250  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:15:59.558253  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:15:59.558257  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:15:59.558260  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:15:59.558274  271403 retry.go:31] will retry after 12.89221211s: missing components: kube-dns
	I0317 11:16:12.454968  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:16:12.455003  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:16:12.455015  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:16:12.455032  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:16:12.455039  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:16:12.455044  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:16:12.455048  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:16:12.455054  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:16:12.455061  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:16:12.455065  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:16:12.455081  271403 retry.go:31] will retry after 17.364942602s: missing components: kube-dns
	I0317 11:16:29.823555  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:16:29.823586  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:16:29.823595  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:16:29.823602  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:16:29.823608  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:16:29.823612  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:16:29.823615  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:16:29.823618  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:16:29.823622  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:16:29.823625  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:16:29.823640  271403 retry.go:31] will retry after 17.370004065s: missing components: kube-dns
	I0317 11:16:47.198398  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:16:47.198427  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:16:47.198435  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:16:47.198442  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:16:47.198446  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:16:47.198451  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:16:47.198455  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:16:47.198459  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:16:47.198462  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:16:47.198465  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:16:47.198480  271403 retry.go:31] will retry after 24.357866835s: missing components: kube-dns
	I0317 11:17:11.562057  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:17:11.562100  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:17:11.562111  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:17:11.562119  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:17:11.562126  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:17:11.562136  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:17:11.562141  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:17:11.562149  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:17:11.562161  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:17:11.562167  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:17:11.562192  271403 retry.go:31] will retry after 38.894278148s: missing components: kube-dns
	I0317 11:17:50.460074  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:17:50.460111  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:17:50.460120  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:17:50.460126  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:17:50.460131  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:17:50.460135  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:17:50.460139  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:17:50.460143  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:17:50.460146  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:17:50.460150  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:17:50.460166  271403 retry.go:31] will retry after 39.430949711s: missing components: kube-dns
	I0317 11:18:29.894514  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:18:29.894546  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:18:29.894555  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:18:29.894563  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:18:29.894567  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:18:29.894573  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:18:29.894579  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:18:29.894588  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:18:29.894593  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:18:29.894602  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:18:29.894620  271403 retry.go:31] will retry after 1m0.36462309s: missing components: kube-dns
	I0317 11:19:30.263928  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:19:30.263968  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:19:30.263978  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:19:30.263984  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:30.263989  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:19:30.263994  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:19:30.263997  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:19:30.264004  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:19:30.264008  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:19:30.264012  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:19:30.264025  271403 retry.go:31] will retry after 1m10.256049964s: missing components: kube-dns
	I0317 11:20:40.524557  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:20:40.524600  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:20:40.524614  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:20:40.524624  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:40.524632  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:20:40.524640  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:20:40.524649  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:20:40.524658  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:20:40.524664  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:20:40.524673  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:20:40.524693  271403 retry.go:31] will retry after 1m5.301611864s: missing components: kube-dns
	I0317 11:21:45.830506  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:21:45.830550  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:21:45.830565  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:21:45.830575  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:45.830582  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:21:45.830589  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:21:45.830596  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:21:45.830602  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:21:45.830612  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:21:45.830619  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:21:45.830639  271403 retry.go:31] will retry after 1m6.469274108s: missing components: kube-dns
	I0317 11:22:52.304439  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:22:52.304483  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:22:52.304503  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:22:52.304514  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:52.304522  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:22:52.304529  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:22:52.304538  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:22:52.304546  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:22:52.304553  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:22:52.304559  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:22:52.304577  271403 retry.go:31] will retry after 57.75468648s: missing components: kube-dns
	I0317 11:23:50.063088  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:23:50.063127  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:23:50.063136  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:23:50.063153  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:50.063159  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:23:50.063166  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:23:50.063169  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:23:50.063174  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:23:50.063177  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:23:50.063180  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:23:50.063197  271403 retry.go:31] will retry after 47.200040689s: missing components: kube-dns
	I0317 11:24:37.266163  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:24:37.266199  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:24:37.266210  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:24:37.266217  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:37.266225  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:24:37.266231  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:24:37.266236  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:24:37.266245  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:24:37.266251  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:24:37.266261  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:24:37.266275  271403 retry.go:31] will retry after 51.703965946s: missing components: kube-dns
	I0317 11:25:28.975667  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:25:28.975708  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:25:28.975723  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:25:28.975732  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:28.975739  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:25:28.975745  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:25:28.975750  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:25:28.975758  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:25:28.975764  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:25:28.975773  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:25:28.975793  271403 retry.go:31] will retry after 1m5.809313986s: missing components: kube-dns
	I0317 11:26:34.792964  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:26:34.792999  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:26:34.793007  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:26:34.793018  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:26:34.793023  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:26:34.793027  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:26:34.793030  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:26:34.793034  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:26:34.793037  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:26:34.793041  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:26:34.793054  271403 retry.go:31] will retry after 46.388333894s: missing components: kube-dns
	I0317 11:27:21.185012  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:27:21.185060  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:27:21.185074  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:27:21.185084  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:27:21.185091  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:27:21.185095  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:27:21.185099  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:27:21.185106  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:27:21.185110  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:27:21.185116  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:27:21.185130  271403 retry.go:31] will retry after 1m14.28936614s: missing components: kube-dns
	I0317 11:28:35.478966  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:28:35.479004  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:28:35.479016  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:28:35.479026  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:28:35.479033  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:28:35.479040  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:28:35.479047  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:28:35.479053  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:28:35.479060  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:28:35.479067  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:28:35.479087  271403 retry.go:31] will retry after 50.356839714s: missing components: kube-dns
	I0317 11:29:25.840490  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:29:25.840529  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:29:25.840539  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:29:25.840547  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:29:25.840551  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:29:25.840557  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:29:25.840560  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:29:25.840565  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:29:25.840568  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:29:25.840572  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:29:25.842964  271403 out.go:201] 
	W0317 11:29:25.844361  271403 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:29:25.844380  271403 out.go:270] * 
	W0317 11:29:25.845232  271403 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:29:25.846427  271403 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (1621.08s)
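The `retry.go:31` lines above show the shape of this failure: minikube repeatedly lists the `kube-system` pods, finds coredns and the calico pods still Pending, and sleeps for progressively longer jittered intervals (about 7s at first, leveling off around a minute) until the 15m wait budget is spent. As a rough illustration of that polling pattern (this is a minimal sketch, not minikube's actual `retry.go`, and `podsReady` is a hypothetical probe):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls probe() with growing, jittered delays until it succeeds
// or the overall budget is exhausted -- the pattern behind the
// "will retry after 7.359129054s" lines in the log above.
func waitFor(probe func() error, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	delay := 5 * time.Second
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", budget, err)
		}
		// Jitter the delay, then grow it, capping near one minute.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < time.Minute {
			delay += delay / 2
		}
	}
}

func main() {
	// podsReady stands in for the system_pods check; here it never
	// succeeds, mimicking the stuck kube-dns pod in this run.
	podsReady := func() error { return errors.New("missing components: kube-dns") }
	fmt.Println(waitFor(podsReady, 30*time.Second))
}
```

In the failed run every probe found the same three pods Pending, so the loop exhausted its budget and `minikube start` exited with status 80.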
E0317 11:34:08.330049   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/enable-default-cni-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:14.789340   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:32.216568   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:32.222921   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:32.234226   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:32.255555   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:32.296913   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:32.378323   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:32.540439   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:32.861957   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:33.504094   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:34.785712   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:37.346979   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:42.468403   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:42.590877   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/auto-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:43.849952   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:44.177560   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:34:52.710691   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:35:13.192101   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:35:31.396226   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/enable-default-cni-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:35:50.079829   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
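The `cert_rotation.go:171` noise between tests comes from client-go's certificate reloader still watching client certs for profiles whose `.minikube/profiles/<name>` directories have already been deleted. A quick way to spot such stale kubeconfig users is to stat each referenced certificate; this is a sketch using client-go's `clientcmd` loader, with the kubeconfig path taken from the environment rather than hard-coded:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// In the CI runs above this would be
	// /home/jenkins/minikube-integration/20535-4918/kubeconfig.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue // inline or token credentials, nothing to stat
		}
		if _, err := os.Stat(auth.ClientCertificate); err != nil {
			fmt.Printf("stale user %q: %v\n", name, err)
		}
	}
}
```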

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (646.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-702762 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-702762 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (10m44.594338847s)

                                                
                                                
-- stdout --
	* [old-k8s-version-702762] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "old-k8s-version-702762" primary control-plane node in "old-k8s-version-702762" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 11:13:45.531196  317731 out.go:345] Setting OutFile to fd 1 ...
	I0317 11:13:45.531341  317731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:13:45.531353  317731 out.go:358] Setting ErrFile to fd 2...
	I0317 11:13:45.531359  317731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:13:45.531552  317731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 11:13:45.532160  317731 out.go:352] Setting JSON to false
	I0317 11:13:45.533501  317731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3319,"bootTime":1742206707,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 11:13:45.533591  317731 start.go:139] virtualization: kvm guest
	I0317 11:13:45.535618  317731 out.go:177] * [old-k8s-version-702762] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 11:13:45.536679  317731 notify.go:220] Checking for updates...
	I0317 11:13:45.536708  317731 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:13:45.537892  317731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:13:45.539189  317731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:13:45.540449  317731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 11:13:45.541482  317731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 11:13:45.542590  317731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:13:45.544130  317731 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:13:45.544241  317731 config.go:182] Loaded profile config "enable-default-cni-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:13:45.544348  317731 config.go:182] Loaded profile config "kindnet-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:13:45.544443  317731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:13:45.569715  317731 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 11:13:45.569813  317731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:13:45.622065  317731 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:13:45.610341379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:13:45.622180  317731 docker.go:318] overlay module found
	I0317 11:13:45.623951  317731 out.go:177] * Using the docker driver based on user configuration
	I0317 11:13:45.625040  317731 start.go:297] selected driver: docker
	I0317 11:13:45.625053  317731 start.go:901] validating driver "docker" against <nil>
	I0317 11:13:45.625073  317731 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:13:45.625908  317731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:13:45.674612  317731 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:13:45.665996408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
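The two `docker system info --format "{{json .}}"` calls above are how minikube validates the Docker driver before provisioning. Decoding just the fields this report relies on (`ServerVersion`, `NCPU`, `MemTotal`, `Driver`) is straightforward; a minimal sketch of that shell-out-and-parse step, not minikube's actual `cli_runner` code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log shows at cli_runner.go:164.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	// Only a few fields; the full payload is much larger, as the dump above shows.
	var info struct {
		ServerVersion   string
		NCPU            int
		MemTotal        int64
		Driver          string
		OperatingSystem string
	}
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM, storage driver %s\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.Driver)
}
```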
	I0317 11:13:45.674872  317731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:13:45.675204  317731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:13:45.677040  317731 out.go:177] * Using Docker driver with root privileges
	I0317 11:13:45.678167  317731 cni.go:84] Creating CNI manager for ""
	I0317 11:13:45.678226  317731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:13:45.678241  317731 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
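The CNI choice logged at `cni.go:143` is a simple driver/runtime dispatch: the docker driver with a non-docker runtime gets kindnet. A hedged sketch of that decision (the function name and the fallback are illustrative, not minikube's full selection table):

```go
package main

import "fmt"

// chooseCNI mirrors the decision logged at cni.go:143: with the docker
// driver and the containerd runtime, minikube recommends kindnet.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "containerd" {
		return "kindnet"
	}
	return "bridge" // illustrative default only
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}
```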
	I0317 11:13:45.678295  317731 start.go:340] cluster config:
	{Name:old-k8s-version-702762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-702762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:13:45.679719  317731 out.go:177] * Starting "old-k8s-version-702762" primary control-plane node in "old-k8s-version-702762" cluster
	I0317 11:13:45.680934  317731 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 11:13:45.682183  317731 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 11:13:45.683424  317731 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0317 11:13:45.683457  317731 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 11:13:45.683465  317731 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0317 11:13:45.683481  317731 cache.go:56] Caching tarball of preloaded images
	I0317 11:13:45.683561  317731 preload.go:172] Found /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:13:45.683571  317731 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0317 11:13:45.683653  317731 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/config.json ...
	I0317 11:13:45.683670  317731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/config.json: {Name:mk98f40377b0b84ba6f0d85a6eed90f9470bd361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:13:45.704858  317731 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 11:13:45.704877  317731 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 11:13:45.704888  317731 cache.go:230] Successfully downloaded all kic artifacts
	I0317 11:13:45.704933  317731 start.go:360] acquireMachinesLock for old-k8s-version-702762: {Name:mk352de156c02c0899b1b1e1e77cd8c28bd2ef51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:13:45.705027  317731 start.go:364] duration metric: took 72.791µs to acquireMachinesLock for "old-k8s-version-702762"
	I0317 11:13:45.705069  317731 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-702762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-702762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:13:45.705150  317731 start.go:125] createHost starting for "" (driver="docker")
	I0317 11:13:45.707123  317731 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0317 11:13:45.707372  317731 start.go:159] libmachine.API.Create for "old-k8s-version-702762" (driver="docker")
	I0317 11:13:45.707407  317731 client.go:168] LocalClient.Create starting
	I0317 11:13:45.707476  317731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem
	I0317 11:13:45.707514  317731 main.go:141] libmachine: Decoding PEM data...
	I0317 11:13:45.707538  317731 main.go:141] libmachine: Parsing certificate...
	I0317 11:13:45.707605  317731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem
	I0317 11:13:45.707637  317731 main.go:141] libmachine: Decoding PEM data...
	I0317 11:13:45.707661  317731 main.go:141] libmachine: Parsing certificate...
	I0317 11:13:45.708094  317731 cli_runner.go:164] Run: docker network inspect old-k8s-version-702762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 11:13:45.726826  317731 cli_runner.go:211] docker network inspect old-k8s-version-702762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 11:13:45.726921  317731 network_create.go:284] running [docker network inspect old-k8s-version-702762] to gather additional debugging logs...
	I0317 11:13:45.726944  317731 cli_runner.go:164] Run: docker network inspect old-k8s-version-702762
	W0317 11:13:45.743623  317731 cli_runner.go:211] docker network inspect old-k8s-version-702762 returned with exit code 1
	I0317 11:13:45.743653  317731 network_create.go:287] error running [docker network inspect old-k8s-version-702762]: docker network inspect old-k8s-version-702762: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-702762 not found
	I0317 11:13:45.743665  317731 network_create.go:289] output of [docker network inspect old-k8s-version-702762]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-702762 not found
	
	** /stderr **
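
Note: the two failed `docker network inspect` calls above are the expected probe for a not-yet-created network; exit status 1 with "network ... not found" on stderr is the signal that minikube must create it. A minimal Go sketch of that probe (the helper name is mine, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists reports whether a Docker network with the given name exists,
// distinguishing the "not found" miss from other docker failures.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	if err == nil {
		return true, nil
	}
	if strings.Contains(string(out), "not found") {
		return false, nil // the same "network ... not found" stderr seen above
	}
	return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, out)
}

func main() {
	ok, err := networkExists("old-k8s-version-702762")
	fmt.Println(ok, err)
}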
	I0317 11:13:45.743806  317731 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:13:45.761212  317731 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
	I0317 11:13:45.761905  317731 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-00bf62ef0133 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:c5:34:86:d6:21} reservation:<nil>}
	I0317 11:13:45.762647  317731 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-81e0001ceae7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:6a:cf:1c:79:e6} reservation:<nil>}
	I0317 11:13:45.763120  317731 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-16edb2a113e3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d6:59:06:a9:a8:e8} reservation:<nil>}
	I0317 11:13:45.763949  317731 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a81c203e078d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:76:61:16:ca:ff:e4} reservation:<nil>}
	I0317 11:13:45.765760  317731 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1c6c0}
	I0317 11:13:45.765994  317731 network_create.go:124] attempt to create docker network old-k8s-version-702762 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0317 11:13:45.766174  317731 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-702762 old-k8s-version-702762
	I0317 11:13:45.818319  317731 network_create.go:108] docker network old-k8s-version-702762 192.168.94.0/24 created
	I0317 11:13:45.818365  317731 kic.go:121] calculated static IP "192.168.94.2" for the "old-k8s-version-702762" container
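
Note: the subnet scan above walks candidate /24s starting at 192.168.49.0 and stepping the third octet by 9 (.49, .58, .67, .76, .85, then the free .94), skipping any subnet already backed by a host bridge. minikube's actual code inspects host interfaces; a rougher sketch of the same idea simply lets `docker network create` reject overlapping pools (candidate list and step size inferred from this log):

package main

import (
	"fmt"
	"os/exec"
)

// createFreeNetwork tries successive 192.168.X.0/24 subnets (X = 49, 58, 67, ...)
// until docker accepts one; docker itself refuses pools overlapping a live bridge.
func createFreeNetwork(name string) (string, error) {
	for third := 49; third <= 94; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500", name)
		if err := cmd.Run(); err == nil {
			return subnet, nil
		}
	}
	return "", fmt.Errorf("no free subnet found for %s", name)
}

func main() {
	subnet, err := createFreeNetwork("example-net")
	fmt.Println(subnet, err)
}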
	I0317 11:13:45.818436  317731 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 11:13:45.836318  317731 cli_runner.go:164] Run: docker volume create old-k8s-version-702762 --label name.minikube.sigs.k8s.io=old-k8s-version-702762 --label created_by.minikube.sigs.k8s.io=true
	I0317 11:13:45.854793  317731 oci.go:103] Successfully created a docker volume old-k8s-version-702762
	I0317 11:13:45.854894  317731 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-702762-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-702762 --entrypoint /usr/bin/test -v old-k8s-version-702762:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 11:13:46.267693  317731 oci.go:107] Successfully prepared a docker volume old-k8s-version-702762
	I0317 11:13:46.267724  317731 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0317 11:13:46.267775  317731 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 11:13:46.267867  317731 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-702762:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 11:13:51.453148  317731 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-702762:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (5.185242998s)
	I0317 11:13:51.453178  317731 kic.go:203] duration metric: took 5.185408338s to extract preloaded images to volume ...
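
Note: the ~5.2s step above seeds the named volume by running a throwaway container with tar as the entrypoint: the host-side preload tarball is bind-mounted read-only and unpacked into the volume that later becomes the node's /var. Reproduced as a sketch (image tag and tarball path are illustrative; the log uses the full kicbase digest and a jenkins-local cache path):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523"
	tarball := "/path/to/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4"
	// Mirror the `docker run --rm --entrypoint /usr/bin/tar ...` call in the log.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "old-k8s-version-702762:/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
	}
}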
	W0317 11:13:51.453304  317731 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 11:13:51.453413  317731 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 11:13:51.501503  317731 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-702762 --name old-k8s-version-702762 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-702762 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-702762 --network old-k8s-version-702762 --ip 192.168.94.2 --volume old-k8s-version-702762:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 11:13:51.750909  317731 cli_runner.go:164] Run: docker container inspect old-k8s-version-702762 --format={{.State.Running}}
	I0317 11:13:51.768520  317731 cli_runner.go:164] Run: docker container inspect old-k8s-version-702762 --format={{.State.Status}}
	I0317 11:13:51.787422  317731 cli_runner.go:164] Run: docker exec old-k8s-version-702762 stat /var/lib/dpkg/alternatives/iptables
	I0317 11:13:51.829100  317731 oci.go:144] the created container "old-k8s-version-702762" has a running status.
	I0317 11:13:51.829149  317731 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/old-k8s-version-702762/id_rsa...
	I0317 11:13:51.882072  317731 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20535-4918/.minikube/machines/old-k8s-version-702762/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 11:13:51.903873  317731 cli_runner.go:164] Run: docker container inspect old-k8s-version-702762 --format={{.State.Status}}
	I0317 11:13:51.923168  317731 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 11:13:51.923196  317731 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-702762 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 11:13:51.963812  317731 cli_runner.go:164] Run: docker container inspect old-k8s-version-702762 --format={{.State.Status}}
	I0317 11:13:51.982547  317731 machine.go:93] provisionDockerMachine start ...
	I0317 11:13:51.982663  317731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-702762
	I0317 11:13:52.005748  317731 main.go:141] libmachine: Using SSH client type: native
	I0317 11:13:52.005979  317731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0317 11:13:52.005991  317731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:13:52.006668  317731 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46116->127.0.0.1:33093: read: connection reset by peer
	I0317 11:13:55.142672  317731 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-702762
	
	I0317 11:13:55.142702  317731 ubuntu.go:169] provisioning hostname "old-k8s-version-702762"
	I0317 11:13:55.142765  317731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-702762
	I0317 11:13:55.161508  317731 main.go:141] libmachine: Using SSH client type: native
	I0317 11:13:55.161783  317731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0317 11:13:55.161800  317731 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-702762 && echo "old-k8s-version-702762" | sudo tee /etc/hostname
	I0317 11:13:55.314011  317731 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-702762
	
	I0317 11:13:55.314115  317731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-702762
	I0317 11:13:55.331271  317731 main.go:141] libmachine: Using SSH client type: native
	I0317 11:13:55.331525  317731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0317 11:13:55.331544  317731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-702762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-702762/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-702762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:13:55.463245  317731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
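
Note: every SSH step above first resolves the host port docker mapped for the container's 22/tcp (33093 here, from the earlier --publish=127.0.0.1::22) and dials 127.0.0.1 on it; the connection reset at 11:13:52 is most likely sshd not yet accepting connections, retried until the hostname command succeeds. A sketch of the port lookup, using the same inspect template as the log (helper name is mine):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the 127.0.0.1 port docker mapped to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %v", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("old-k8s-version-702762")
	fmt.Println(port, err)
}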
	I0317 11:13:55.463308  317731 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20535-4918/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-4918/.minikube}
	I0317 11:13:55.463333  317731 ubuntu.go:177] setting up certificates
	I0317 11:13:55.463349  317731 provision.go:84] configureAuth start
	I0317 11:13:55.463397  317731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-702762
	I0317 11:13:55.480152  317731 provision.go:143] copyHostCerts
	I0317 11:13:55.480219  317731 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem, removing ...
	I0317 11:13:55.480230  317731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem
	I0317 11:13:55.480298  317731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem (1082 bytes)
	I0317 11:13:55.480384  317731 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem, removing ...
	I0317 11:13:55.480392  317731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem
	I0317 11:13:55.480415  317731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem (1123 bytes)
	I0317 11:13:55.480467  317731 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem, removing ...
	I0317 11:13:55.480474  317731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem
	I0317 11:13:55.480493  317731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem (1679 bytes)
	I0317 11:13:55.480544  317731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-702762 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-702762]
	I0317 11:13:56.012720  317731 provision.go:177] copyRemoteCerts
	I0317 11:13:56.012791  317731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:13:56.012830  317731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-702762
	I0317 11:13:56.031655  317731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/old-k8s-version-702762/id_rsa Username:docker}
	I0317 11:13:56.131599  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:13:56.154451  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0317 11:13:56.175471  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 11:13:56.196642  317731 provision.go:87] duration metric: took 733.277885ms to configureAuth
	I0317 11:13:56.196669  317731 ubuntu.go:193] setting minikube options for container-runtime
	I0317 11:13:56.196834  317731 config.go:182] Loaded profile config "old-k8s-version-702762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0317 11:13:56.196845  317731 machine.go:96] duration metric: took 4.214263553s to provisionDockerMachine
	I0317 11:13:56.196851  317731 client.go:171] duration metric: took 10.489434621s to LocalClient.Create
	I0317 11:13:56.196876  317731 start.go:167] duration metric: took 10.489505218s to libmachine.API.Create "old-k8s-version-702762"
	I0317 11:13:56.196891  317731 start.go:293] postStartSetup for "old-k8s-version-702762" (driver="docker")
	I0317 11:13:56.196903  317731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:13:56.196949  317731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:13:56.196987  317731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-702762
	I0317 11:13:56.216128  317731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/old-k8s-version-702762/id_rsa Username:docker}
	I0317 11:13:56.312038  317731 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:13:56.315013  317731 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 11:13:56.315050  317731 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 11:13:56.315058  317731 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 11:13:56.315065  317731 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 11:13:56.315073  317731 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/addons for local assets ...
	I0317 11:13:56.315120  317731 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/files for local assets ...
	I0317 11:13:56.315206  317731 filesync.go:149] local asset: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem -> 116902.pem in /etc/ssl/certs
	I0317 11:13:56.315317  317731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:13:56.322989  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:13:56.345414  317731 start.go:296] duration metric: took 148.509327ms for postStartSetup
	I0317 11:13:56.345752  317731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-702762
	I0317 11:13:56.363862  317731 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/config.json ...
	I0317 11:13:56.364167  317731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 11:13:56.364215  317731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-702762
	I0317 11:13:56.381420  317731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/old-k8s-version-702762/id_rsa Username:docker}
	I0317 11:13:56.472298  317731 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 11:13:56.476471  317731 start.go:128] duration metric: took 10.771302556s to createHost
	I0317 11:13:56.476493  317731 start.go:83] releasing machines lock for "old-k8s-version-702762", held for 10.771451132s
	I0317 11:13:56.476560  317731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-702762
	I0317 11:13:56.493436  317731 ssh_runner.go:195] Run: cat /version.json
	I0317 11:13:56.493480  317731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-702762
	I0317 11:13:56.493542  317731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 11:13:56.493611  317731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-702762
	I0317 11:13:56.512817  317731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/old-k8s-version-702762/id_rsa Username:docker}
	I0317 11:13:56.513058  317731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/old-k8s-version-702762/id_rsa Username:docker}
	I0317 11:13:56.618816  317731 ssh_runner.go:195] Run: systemctl --version
	I0317 11:13:56.700590  317731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:13:56.704992  317731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 11:13:56.727462  317731 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 11:13:56.727532  317731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:13:56.751593  317731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
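
Note: the find/sed pair above patches the loopback CNI config in place, then sidelines any bridge/podman configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs later (kindnet here) stays active. The renaming half as a Go sketch of the `find ... -exec mv {} {}.mk_disabled` it runs inside the node:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableBridgeCNI renames bridge/podman CNI configs out of the way.
func disableBridgeCNI(dir string) error {
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already sidelined
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(disableBridgeCNI("/etc/cni/net.d"))
}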
	I0317 11:13:56.751618  317731 start.go:495] detecting cgroup driver to use...
	I0317 11:13:56.751648  317731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 11:13:56.751692  317731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:13:56.762529  317731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:13:56.772386  317731 docker.go:217] disabling cri-docker service (if available) ...
	I0317 11:13:56.772444  317731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 11:13:56.784823  317731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 11:13:56.797188  317731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 11:13:56.865863  317731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 11:13:56.946905  317731 docker.go:233] disabling docker service ...
	I0317 11:13:56.946964  317731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 11:13:56.965669  317731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 11:13:56.978040  317731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 11:13:57.055536  317731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 11:13:57.133506  317731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 11:13:57.144267  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:13:57.160241  317731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0317 11:13:57.169876  317731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:13:57.178745  317731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:13:57.178816  317731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:13:57.187713  317731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:13:57.196114  317731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:13:57.204508  317731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:13:57.212818  317731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:13:57.220661  317731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:13:57.229243  317731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:13:57.236718  317731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:13:57.244286  317731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:13:57.325492  317731 ssh_runner.go:195] Run: sudo systemctl restart containerd
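
Note: the run of sed commands above rewrites /etc/containerd/config.toml before this restart: pin the sandbox (pause) image to registry.k8s.io/pause:3.2, force SystemdCgroup = false to match the cgroupfs driver detected on the host, normalize runc to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. One of those edits as a Go sketch instead of sed (equivalent to the SystemdCgroup substitution in the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfs rewrites `SystemdCgroup = ...` to false, preserving indentation.
func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0644)
}

func main() {
	fmt.Println(setCgroupfs("/etc/containerd/config.toml"))
}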
	I0317 11:13:57.441911  317731 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 11:13:57.441970  317731 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 11:13:57.445923  317731 start.go:563] Will wait 60s for crictl version
	I0317 11:13:57.445977  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:13:57.450251  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:13:57.483876  317731 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 11:13:57.483938  317731 ssh_runner.go:195] Run: containerd --version
	I0317 11:13:57.505922  317731 ssh_runner.go:195] Run: containerd --version
	I0317 11:13:57.533163  317731 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
	I0317 11:13:57.534292  317731 cli_runner.go:164] Run: docker network inspect old-k8s-version-702762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:13:57.552061  317731 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0317 11:13:57.555974  317731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:13:57.566454  317731 kubeadm.go:883] updating cluster {Name:old-k8s-version-702762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-702762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:13:57.566567  317731 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0317 11:13:57.566617  317731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:13:57.598464  317731 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0317 11:13:57.598534  317731 ssh_runner.go:195] Run: which lz4
	I0317 11:13:57.602110  317731 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 11:13:57.605453  317731 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 11:13:57.605479  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (472503869 bytes)
	I0317 11:13:58.488886  317731 containerd.go:563] duration metric: took 886.800723ms to copy over tarball
	I0317 11:13:58.488952  317731 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 11:14:00.862660  317731 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.373682213s)
	I0317 11:14:00.862686  317731 containerd.go:570] duration metric: took 2.373773344s to extract the tarball
	I0317 11:14:00.862693  317731 ssh_runner.go:146] rm: /preloaded.tar.lz4
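
Note: the stat exit-status-1 above is the normal miss path of ssh_runner's check-before-copy: a failed `stat -c "%s %y"` on the remote path means the file is absent, so the ~472MB tarball is scp'd in; a successful stat would have let the transfer be skipped. The decision in sketch form (run locally here for self-containment; in the log the stat runs over SSH):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsTransfer treats a clean stat exit as "present" and exit status 1
// (the "cannot statx ... No such file" case above) as "copy it".
func needsTransfer(statCmd *exec.Cmd) (bool, error) {
	err := statCmd.Run()
	if err == nil {
		return false, nil // file exists; compare size/mtime before skipping
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // stat says: no such file, transfer needed
	}
	return false, err // transport failure, not a file-missing signal
}

func main() {
	cmd := exec.Command("stat", "-c", "%s %y", "/preloaded.tar.lz4")
	fmt.Println(needsTransfer(cmd))
}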
	I0317 11:14:00.933354  317731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:14:01.008262  317731 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 11:14:01.102305  317731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:14:01.136171  317731 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0317 11:14:01.136199  317731 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0317 11:14:01.136268  317731 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 11:14:01.136276  317731 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:01.136317  317731 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0317 11:14:01.136338  317731 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0317 11:14:01.136354  317731 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 11:14:01.136317  317731 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 11:14:01.136336  317731 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0317 11:14:01.136276  317731 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 11:14:01.137714  317731 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 11:14:01.137725  317731 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 11:14:01.137737  317731 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0317 11:14:01.137716  317731 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 11:14:01.137724  317731 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:01.137739  317731 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0317 11:14:01.137731  317731 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0317 11:14:01.137799  317731 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 11:14:01.332538  317731 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0317 11:14:01.332617  317731 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.2
	I0317 11:14:01.354625  317731 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.20.0" and sha "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	I0317 11:14:01.354687  317731 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.20.0
	I0317 11:14:01.354743  317731 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0317 11:14:01.354811  317731 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0317 11:14:01.354854  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:14:01.369897  317731 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.20.0" and sha "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
	I0317 11:14:01.369960  317731 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 11:14:01.375433  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 11:14:01.375474  317731 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0317 11:14:01.375520  317731 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 11:14:01.375566  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:14:01.387335  317731 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.20.0" and sha "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	I0317 11:14:01.387402  317731 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.20.0
	I0317 11:14:01.391639  317731 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0317 11:14:01.391687  317731 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 11:14:01.391731  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:14:01.395017  317731 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.4.13-0" and sha "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
	I0317 11:14:01.395070  317731 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.4.13-0
	I0317 11:14:01.410972  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 11:14:01.410978  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 11:14:01.411025  317731 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0317 11:14:01.411063  317731 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 11:14:01.411076  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 11:14:01.411103  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:14:01.416120  317731 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0317 11:14:01.416159  317731 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0317 11:14:01.416191  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:14:01.428179  317731 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns:1.7.0" and sha "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	I0317 11:14:01.428241  317731 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns:1.7.0
	I0317 11:14:01.452708  317731 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.20.0" and sha "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	I0317 11:14:01.452775  317731 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.20.0
	I0317 11:14:01.504025  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 11:14:01.504116  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 11:14:01.504137  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 11:14:01.504206  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 11:14:01.504215  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 11:14:01.507186  317731 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0317 11:14:01.507294  317731 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0317 11:14:01.507337  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:14:01.533095  317731 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0317 11:14:01.533157  317731 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 11:14:01.533204  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:14:01.625593  317731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0317 11:14:01.625647  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 11:14:01.625702  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 11:14:01.625734  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 11:14:01.625779  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 11:14:01.625817  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 11:14:01.625866  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 11:14:01.810623  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 11:14:01.810634  317731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0317 11:14:01.810697  317731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0317 11:14:01.810780  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 11:14:01.812263  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 11:14:01.812314  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 11:14:01.911299  317731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0317 11:14:01.911382  317731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0317 11:14:01.917459  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 11:14:01.917539  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 11:14:01.954315  317731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0317 11:14:01.954369  317731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0317 11:14:02.444912  317731 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0317 11:14:02.444985  317731 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:02.469554  317731 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0317 11:14:02.469594  317731 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:02.469630  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:14:02.473095  317731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:02.518494  317731 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0317 11:14:02.518606  317731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0317 11:14:02.522105  317731 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0317 11:14:02.522134  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0317 11:14:02.585788  317731 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0317 11:14:02.585862  317731 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0317 11:14:02.932926  317731 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0317 11:14:02.932983  317731 cache_images.go:92] duration metric: took 1.796767053s to LoadCachedImages
	W0317 11:14:02.933058  317731 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
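
Note: the long check/rmi/load cycle above is minikube reconciling containerd's image store with its on-disk cache: `ctr -n=k8s.io images ls name==<img>` probes each image at the expected sha, mismatches are removed with `crictl rmi`, and cached tarballs are loaded back with `ctr -n=k8s.io images import`. Only storage-provisioner completes here; the pause_3.2 cache file is missing on disk, hence the X warning. The load step as a sketch:

package main

import (
	"fmt"
	"os/exec"
)

// importImage loads an image tarball into containerd's k8s.io namespace,
// mirroring the `sudo ctr -n=k8s.io images import ...` call in the log.
func importImage(tarball string) error {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ctr import %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	fmt.Println(importImage("/var/lib/minikube/images/storage-provisioner_v5"))
}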
	I0317 11:14:02.933076  317731 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.20.0 containerd true true} ...
	I0317 11:14:02.933179  317731 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-702762 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-702762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 11:14:02.933246  317731 ssh_runner.go:195] Run: sudo crictl info
	I0317 11:14:02.966646  317731 cni.go:84] Creating CNI manager for ""
	I0317 11:14:02.966676  317731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:14:02.966690  317731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:14:02.966712  317731 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-702762 NodeName:old-k8s-version-702762 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0317 11:14:02.966892  317731 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-702762"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 11:14:02.966960  317731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0317 11:14:02.975442  317731 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 11:14:02.975501  317731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 11:14:02.984040  317731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0317 11:14:03.001124  317731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:14:03.019205  317731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0317 11:14:03.035497  317731 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0317 11:14:03.038673  317731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:14:03.049141  317731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:14:03.130168  317731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:14:03.142622  317731 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762 for IP: 192.168.94.2
	I0317 11:14:03.142684  317731 certs.go:194] generating shared ca certs ...
	I0317 11:14:03.142709  317731 certs.go:226] acquiring lock for ca certs: {Name:mkf58624c63680e02907d28348d45986283847c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:03.142878  317731 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key
	I0317 11:14:03.142934  317731 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key
	I0317 11:14:03.142945  317731 certs.go:256] generating profile certs ...
	I0317 11:14:03.142995  317731 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.key
	I0317 11:14:03.143009  317731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt with IP's: []
	I0317 11:14:03.258020  317731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt ...
	I0317 11:14:03.258050  317731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: {Name:mk343b9653ef05065d7bd48a926765731d63b6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:03.258213  317731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.key ...
	I0317 11:14:03.258225  317731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.key: {Name:mkbfd117b4015f5fe6e5901916fec6c49ee04b7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:03.258321  317731 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.key.52c4ab9e
	I0317 11:14:03.258337  317731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.crt.52c4ab9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0317 11:14:03.657885  317731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.crt.52c4ab9e ...
	I0317 11:14:03.657913  317731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.crt.52c4ab9e: {Name:mkf0c6ad1e4dcff8b69bcb7a42d7aa34a3e1c819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:03.658078  317731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.key.52c4ab9e ...
	I0317 11:14:03.658091  317731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.key.52c4ab9e: {Name:mkda872f1ac7903e635e8072da446e0e16014573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:03.658159  317731 certs.go:381] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.crt.52c4ab9e -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.crt
	I0317 11:14:03.658227  317731 certs.go:385] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.key.52c4ab9e -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.key
	I0317 11:14:03.658285  317731 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/proxy-client.key
	I0317 11:14:03.658304  317731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/proxy-client.crt with IP's: []
	I0317 11:14:03.825286  317731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/proxy-client.crt ...
	I0317 11:14:03.825318  317731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/proxy-client.crt: {Name:mkcd39a108d5fc4c2a56eca86811c841e44f34cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:03.825497  317731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/proxy-client.key ...
	I0317 11:14:03.825514  317731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/proxy-client.key: {Name:mk8f9bd1a88e5c243ccd9b5a27fef080c8fedfb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:03.825707  317731 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem (1338 bytes)
	W0317 11:14:03.825744  317731 certs.go:480] ignoring /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690_empty.pem, impossibly tiny 0 bytes
	I0317 11:14:03.825756  317731 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 11:14:03.825790  317731 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem (1082 bytes)
	I0317 11:14:03.825819  317731 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem (1123 bytes)
	I0317 11:14:03.825844  317731 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem (1679 bytes)
	I0317 11:14:03.825885  317731 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:14:03.826445  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:14:03.849549  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:14:03.873518  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:14:03.897267  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 11:14:03.920004  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0317 11:14:03.944089  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 11:14:03.966636  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:14:03.989171  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 11:14:04.012627  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /usr/share/ca-certificates/116902.pem (1708 bytes)
	I0317 11:14:04.036798  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:14:04.058788  317731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem --> /usr/share/ca-certificates/11690.pem (1338 bytes)
	I0317 11:14:04.081287  317731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:14:04.096920  317731 ssh_runner.go:195] Run: openssl version
	I0317 11:14:04.102005  317731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11690.pem && ln -fs /usr/share/ca-certificates/11690.pem /etc/ssl/certs/11690.pem"
	I0317 11:14:04.110692  317731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11690.pem
	I0317 11:14:04.113686  317731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:32 /usr/share/ca-certificates/11690.pem
	I0317 11:14:04.113733  317731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11690.pem
	I0317 11:14:04.120129  317731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11690.pem /etc/ssl/certs/51391683.0"
	I0317 11:14:04.128752  317731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116902.pem && ln -fs /usr/share/ca-certificates/116902.pem /etc/ssl/certs/116902.pem"
	I0317 11:14:04.136887  317731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116902.pem
	I0317 11:14:04.139977  317731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:32 /usr/share/ca-certificates/116902.pem
	I0317 11:14:04.140020  317731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116902.pem
	I0317 11:14:04.146490  317731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116902.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:14:04.154903  317731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:14:04.163227  317731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:14:04.166713  317731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:14:04.166757  317731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:14:04.172917  317731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
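The three openssl/ln sequences above all implement the standard OpenSSL hashed-directory layout: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (hence b5213941.0 for minikubeCA.pem). A minimal sketch of the same pattern for one certificate:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"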
	I0317 11:14:04.181141  317731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:14:04.184137  317731 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:14:04.184186  317731 kubeadm.go:392] StartCluster: {Name:old-k8s-version-702762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-702762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:14:04.184264  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 11:14:04.184307  317731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 11:14:04.216286  317731 cri.go:89] found id: ""
	I0317 11:14:04.216354  317731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:14:04.224750  317731 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:14:04.232931  317731 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 11:14:04.232991  317731 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:14:04.240767  317731 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:14:04.240789  317731 kubeadm.go:157] found existing configuration files:
	
	I0317 11:14:04.240829  317731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 11:14:04.248731  317731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:14:04.248773  317731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:14:04.256046  317731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 11:14:04.263947  317731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:14:04.264011  317731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:14:04.271370  317731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 11:14:04.279204  317731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:14:04.279295  317731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:14:04.286869  317731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 11:14:04.294791  317731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:14:04.294834  317731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
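The four grep/rm pairs above are one pattern applied to each kubeconfig: a file is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A condensed shell sketch of that loop (equivalent behavior, not minikube's actual Go code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done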
	I0317 11:14:04.302276  317731 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 11:14:04.357174  317731 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 11:14:04.357279  317731 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:14:04.395134  317731 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 11:14:04.395228  317731 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 11:14:04.395328  317731 kubeadm.go:310] OS: Linux
	I0317 11:14:04.395402  317731 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 11:14:04.395531  317731 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 11:14:04.395620  317731 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 11:14:04.395690  317731 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 11:14:04.395838  317731 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 11:14:04.395939  317731 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 11:14:04.396024  317731 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 11:14:04.396079  317731 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 11:14:04.481388  317731 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:14:04.481520  317731 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:14:04.481615  317731 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:14:04.650779  317731 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:14:04.653428  317731 out.go:235]   - Generating certificates and keys ...
	I0317 11:14:04.653534  317731 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:14:04.653652  317731 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:14:04.786371  317731 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:14:04.991285  317731 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:14:05.120303  317731 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:14:05.258332  317731 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:14:05.481325  317731 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:14:05.481523  317731 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-702762] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0317 11:14:05.835234  317731 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:14:05.835463  317731 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-702762] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0317 11:14:06.014438  317731 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:14:06.095603  317731 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:14:06.171781  317731 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:14:06.171956  317731 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:14:06.305081  317731 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:14:06.557968  317731 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:14:06.652757  317731 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:14:06.783620  317731 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:14:06.793852  317731 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:14:06.795023  317731 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:14:06.795111  317731 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:14:06.879419  317731 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:14:06.881183  317731 out.go:235]   - Booting up control plane ...
	I0317 11:14:06.881322  317731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:14:06.887500  317731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:14:06.888589  317731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:14:06.889413  317731 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:14:06.891879  317731 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 11:14:18.894244  317731 kubeadm.go:310] [apiclient] All control plane components are healthy after 12.002364 seconds
	I0317 11:14:18.894440  317731 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:14:18.905193  317731 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:14:19.423107  317731 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:14:19.423403  317731 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-702762 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0317 11:14:19.930702  317731 kubeadm.go:310] [bootstrap-token] Using token: h4ns8n.9vbrcf6tho0w852s
	I0317 11:14:19.931924  317731 out.go:235]   - Configuring RBAC rules ...
	I0317 11:14:19.932119  317731 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:14:19.936369  317731 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:14:19.945266  317731 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:14:19.948638  317731 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:14:19.950480  317731 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:14:19.952446  317731 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:14:20.003826  317731 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:14:20.200633  317731 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:14:20.344318  317731 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:14:20.345424  317731 kubeadm.go:310] 
	I0317 11:14:20.345509  317731 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:14:20.345517  317731 kubeadm.go:310] 
	I0317 11:14:20.345602  317731 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:14:20.345611  317731 kubeadm.go:310] 
	I0317 11:14:20.345641  317731 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:14:20.345732  317731 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:14:20.345805  317731 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:14:20.345815  317731 kubeadm.go:310] 
	I0317 11:14:20.345900  317731 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:14:20.345917  317731 kubeadm.go:310] 
	I0317 11:14:20.345997  317731 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:14:20.346016  317731 kubeadm.go:310] 
	I0317 11:14:20.346093  317731 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:14:20.346175  317731 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:14:20.346245  317731 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:14:20.346251  317731 kubeadm.go:310] 
	I0317 11:14:20.346364  317731 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:14:20.346472  317731 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:14:20.346480  317731 kubeadm.go:310] 
	I0317 11:14:20.346556  317731 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h4ns8n.9vbrcf6tho0w852s \
	I0317 11:14:20.346651  317731 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 \
	I0317 11:14:20.346684  317731 kubeadm.go:310]     --control-plane 
	I0317 11:14:20.346694  317731 kubeadm.go:310] 
	I0317 11:14:20.346787  317731 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:14:20.346798  317731 kubeadm.go:310] 
	I0317 11:14:20.346892  317731 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h4ns8n.9vbrcf6tho0w852s \
	I0317 11:14:20.347003  317731 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 
	I0317 11:14:20.349336  317731 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 11:14:20.349457  317731 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
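The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the cluster CA's public key. If it is ever lost, it can be recomputed on the control plane with the standard kubeadm recipe (the certificate path here is minikube's certificatesDir rather than the usual /etc/kubernetes/pki):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'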
	I0317 11:14:20.349485  317731 cni.go:84] Creating CNI manager for ""
	I0317 11:14:20.349495  317731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:14:20.350687  317731 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 11:14:20.351625  317731 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 11:14:20.355241  317731 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0317 11:14:20.355287  317731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 11:14:20.372998  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 11:14:20.697718  317731 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:14:20.697785  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:20.697799  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-702762 minikube.k8s.io/updated_at=2025_03_17T11_14_20_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=old-k8s-version-702762 minikube.k8s.io/primary=true
	I0317 11:14:20.704816  317731 ops.go:34] apiserver oom_adj: -16
	I0317 11:14:20.810999  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:21.311125  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:21.811965  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:22.311304  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:22.811386  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:23.311912  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:23.811616  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:24.311098  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:24.811303  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:25.311747  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:25.812015  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:26.311632  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:26.811857  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:27.311121  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:27.812037  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:28.311573  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:28.811963  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:29.311634  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:29.811794  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:30.311381  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:30.811446  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:31.311405  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:31.811504  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:32.311623  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:32.811584  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:33.311453  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:33.811985  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:34.311387  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:34.812057  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:35.311916  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:35.811501  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:36.311360  317731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:14:36.458040  317731 kubeadm.go:1113] duration metric: took 15.760312449s to wait for elevateKubeSystemPrivileges
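The burst of identical `kubectl get sa default` calls above is a poll loop: minikube retries roughly every 500ms until the default service account exists, which signals that the controller-manager's bootstrapping is far enough along to grant kube-system its RBAC binding. The same wait, sketched as a shell loop:

	until sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done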
	I0317 11:14:36.458086  317731 kubeadm.go:394] duration metric: took 32.273901257s to StartCluster
	I0317 11:14:36.458108  317731 settings.go:142] acquiring lock: {Name:mk2a57d556efff40ccd4336229d7a78216b861f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:36.458188  317731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:14:36.460048  317731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/kubeconfig: {Name:mk686b9f6159ab958672b945ae0aa5a9c96e9ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:36.460303  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:14:36.460313  317731 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:14:36.460360  317731 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:14:36.460441  317731 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-702762"
	I0317 11:14:36.460460  317731 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-702762"
	I0317 11:14:36.460477  317731 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-702762"
	I0317 11:14:36.460510  317731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-702762"
	I0317 11:14:36.460514  317731 config.go:182] Loaded profile config "old-k8s-version-702762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0317 11:14:36.460485  317731 host.go:66] Checking if "old-k8s-version-702762" exists ...
	I0317 11:14:36.460899  317731 cli_runner.go:164] Run: docker container inspect old-k8s-version-702762 --format={{.State.Status}}
	I0317 11:14:36.461157  317731 cli_runner.go:164] Run: docker container inspect old-k8s-version-702762 --format={{.State.Status}}
	I0317 11:14:36.462846  317731 out.go:177] * Verifying Kubernetes components...
	I0317 11:14:36.464164  317731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:14:36.483497  317731 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-702762"
	I0317 11:14:36.483541  317731 host.go:66] Checking if "old-k8s-version-702762" exists ...
	I0317 11:14:36.483859  317731 cli_runner.go:164] Run: docker container inspect old-k8s-version-702762 --format={{.State.Status}}
	I0317 11:14:36.483861  317731 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:36.485060  317731 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:14:36.485081  317731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:14:36.485118  317731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-702762
	I0317 11:14:36.501408  317731 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:14:36.501928  317731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:14:36.502035  317731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-702762
	I0317 11:14:36.507084  317731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/old-k8s-version-702762/id_rsa Username:docker}
	I0317 11:14:36.529229  317731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/old-k8s-version-702762/id_rsa Username:docker}
	I0317 11:14:36.728841  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 11:14:36.807663  317731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:14:36.917956  317731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:14:36.922455  317731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:14:37.450942  317731 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
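The sed pipeline above rewrites the coredns ConfigMap in place so that pods can resolve the host gateway by name. After the replace, the Corefile carries a block like:

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}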
	I0317 11:14:37.452598  317731 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-702762" to be "Ready" ...
	I0317 11:14:37.510191  317731 node_ready.go:49] node "old-k8s-version-702762" has status "Ready":"True"
	I0317 11:14:37.510218  317731 node_ready.go:38] duration metric: took 57.590088ms for node "old-k8s-version-702762" to be "Ready" ...
	I0317 11:14:37.510230  317731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:14:37.513962  317731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-f5872" in "kube-system" namespace to be "Ready" ...
	I0317 11:14:37.732904  317731 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:14:37.734227  317731 addons.go:514] duration metric: took 1.273864423s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 11:14:37.955921  317731 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-702762" context rescaled to 1 replicas
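The rescale above trims the stock two-replica CoreDNS deployment down to one to fit the single small node; minikube does this through the API, but the equivalent manual step would be (illustrative hand-run command):

	kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system \
	  scale deployment coredns --replicas=1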
	I0317 11:14:39.519737  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:14:41.520777  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:14:44.019430  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:14:46.519513  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:14:48.520001  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:14:51.019020  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:14:53.019601  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:14:55.019779  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:14:57.020114  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:14:59.519930  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:02.019307  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:04.020050  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:06.518699  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:08.519920  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:11.020401  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:13.519364  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:15.519607  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:18.018740  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:20.018785  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:22.583020  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:25.022111  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:27.518709  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:30.018063  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:32.018934  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:34.019377  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:36.019561  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:38.519353  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:40.519717  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:43.019313  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:45.519277  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:48.019234  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:50.024857  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:52.519182  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:54.519348  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:57.019018  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:59.019552  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:01.519001  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:03.519312  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:06.019374  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:08.519413  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:11.019077  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:13.019597  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:15.520489  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:18.019919  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:20.518527  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:22.518827  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:25.019463  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:27.518482  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:29.519609  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:32.018732  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:34.018984  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:36.518962  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:38.519160  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:41.019838  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:43.518568  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:45.518905  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:48.018864  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:50.019598  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:52.519683  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:55.018609  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:57.019404  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:59.019612  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:01.519826  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:04.020472  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:06.519792  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:09.019170  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:11.020405  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:13.519459  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:15.519569  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:18.019166  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:20.019462  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:22.519483  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:25.019334  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:27.518679  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:29.521435  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:32.019065  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:34.519079  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:37.019392  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:39.519078  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:42.018828  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:44.019743  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:46.518805  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:49.018617  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:51.018923  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:53.019366  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:55.519025  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:58.019838  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:00.020050  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:02.519745  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:05.018822  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:07.018949  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:09.021051  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:11.518599  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:13.519017  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:16.018768  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:18.019022  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:20.019225  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:22.518805  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:24.519381  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:26.519583  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:29.019190  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:31.019841  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:33.519597  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:36.019981  317731 pod_ready.go:103] pod "coredns-74ff55c5b-f5872" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:37.519080  317731 pod_ready.go:82] duration metric: took 4m0.005085219s for pod "coredns-74ff55c5b-f5872" in "kube-system" namespace to be "Ready" ...
	E0317 11:18:37.519108  317731 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:18:37.519115  317731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-mm622" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:37.520939  317731 pod_ready.go:98] error getting pod "coredns-74ff55c5b-mm622" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-mm622" not found
	I0317 11:18:37.520963  317731 pod_ready.go:82] duration metric: took 1.841832ms for pod "coredns-74ff55c5b-mm622" in "kube-system" namespace to be "Ready" ...
	E0317 11:18:37.520977  317731 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-mm622" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-mm622" not found
	I0317 11:18:37.520986  317731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-702762" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:37.524671  317731 pod_ready.go:93] pod "etcd-old-k8s-version-702762" in "kube-system" namespace has status "Ready":"True"
	I0317 11:18:37.524688  317731 pod_ready.go:82] duration metric: took 3.694226ms for pod "etcd-old-k8s-version-702762" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:37.524699  317731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-702762" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:37.528169  317731 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-702762" in "kube-system" namespace has status "Ready":"True"
	I0317 11:18:37.528182  317731 pod_ready.go:82] duration metric: took 3.477413ms for pod "kube-apiserver-old-k8s-version-702762" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:37.528192  317731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-702762" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:37.531460  317731 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-702762" in "kube-system" namespace has status "Ready":"True"
	I0317 11:18:37.531479  317731 pod_ready.go:82] duration metric: took 3.279989ms for pod "kube-controller-manager-old-k8s-version-702762" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:37.531490  317731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l5hsd" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:37.717993  317731 pod_ready.go:93] pod "kube-proxy-l5hsd" in "kube-system" namespace has status "Ready":"True"
	I0317 11:18:37.718024  317731 pod_ready.go:82] duration metric: took 186.520644ms for pod "kube-proxy-l5hsd" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:37.718041  317731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-702762" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:38.117905  317731 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-702762" in "kube-system" namespace has status "Ready":"True"
	I0317 11:18:38.117934  317731 pod_ready.go:82] duration metric: took 399.881946ms for pod "kube-scheduler-old-k8s-version-702762" in "kube-system" namespace to be "Ready" ...
	I0317 11:18:38.117947  317731 pod_ready.go:39] duration metric: took 4m0.60770158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
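
The four-minute wait that just expired is minikube's pod_ready.go polling loop. For readers reproducing this check outside minikube, here is a minimal client-go sketch of the same poll-until-Ready pattern, assuming client-go/apimachinery v0.27+; the helper name waitPodReady and the hard-coded kubeconfig path are illustrative, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the
// timeout elapses, roughly what the pod_ready.go lines above record.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// A deleted pod surfaces as the "(skipping!): pods ... not found"
				// case seen above for coredns-74ff55c5b-mm622.
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitPodReady(context.Background(), cs, "kube-system", "coredns-74ff55c5b-f5872", 4*time.Minute)
	fmt.Println("ready wait result:", err)
}
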
	I0317 11:18:38.117971  317731 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:18:38.118010  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:18:38.118059  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:18:38.156836  317731 cri.go:89] found id: "47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41"
	I0317 11:18:38.156857  317731 cri.go:89] found id: ""
	I0317 11:18:38.156864  317731 logs.go:282] 1 containers: [47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41]
	I0317 11:18:38.156905  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:38.160672  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:18:38.160742  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:18:38.194830  317731 cri.go:89] found id: "dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f"
	I0317 11:18:38.194851  317731 cri.go:89] found id: ""
	I0317 11:18:38.194859  317731 logs.go:282] 1 containers: [dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f]
	I0317 11:18:38.194914  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:38.198168  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:18:38.198246  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:18:38.232739  317731 cri.go:89] found id: ""
	I0317 11:18:38.232764  317731 logs.go:282] 0 containers: []
	W0317 11:18:38.232774  317731 logs.go:284] No container was found matching "coredns"
	I0317 11:18:38.232780  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:18:38.232836  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:18:38.268302  317731 cri.go:89] found id: "cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827"
	I0317 11:18:38.268325  317731 cri.go:89] found id: ""
	I0317 11:18:38.268331  317731 logs.go:282] 1 containers: [cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827]
	I0317 11:18:38.268377  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:38.271887  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:18:38.271954  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:18:38.308642  317731 cri.go:89] found id: "66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12"
	I0317 11:18:38.308665  317731 cri.go:89] found id: ""
	I0317 11:18:38.308673  317731 logs.go:282] 1 containers: [66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12]
	I0317 11:18:38.308728  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:38.312579  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:18:38.312643  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:18:38.345659  317731 cri.go:89] found id: "344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72"
	I0317 11:18:38.345686  317731 cri.go:89] found id: ""
	I0317 11:18:38.345694  317731 logs.go:282] 1 containers: [344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72]
	I0317 11:18:38.345747  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:38.349338  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:18:38.349408  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:18:38.384472  317731 cri.go:89] found id: ""
	I0317 11:18:38.384500  317731 logs.go:282] 0 containers: []
	W0317 11:18:38.384512  317731 logs.go:284] No container was found matching "kindnet"
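
The container listings above all follow one pattern from cri.go: run `sudo crictl ps -a --quiet --name=<name>` over SSH and count the returned IDs. Below is a compact sketch of that loop, meant to be run on the node itself (e.g. via `minikube ssh`); listContainers is an illustrative name, not minikube's. An empty result is the "0 containers" case that produces `No container was found matching "coredns"` and `"kindnet"`.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs the same command the log records and splits the
// --quiet output (one container ID per line) into a slice.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, n := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainers(n)
		if err != nil {
			fmt.Printf("%s: %v\n", n, err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", n, len(ids), ids)
	}
}
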
	I0317 11:18:38.384528  317731 logs.go:123] Gathering logs for kubelet ...
	I0317 11:18:38.384542  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0317 11:18:38.444994  317731 logs.go:138] Found kubelet problem: Mar 17 11:14:39 old-k8s-version-702762 kubelet[2107]: E0317 11:14:39.932492    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:38.445143  317731 logs.go:138] Found kubelet problem: Mar 17 11:14:40 old-k8s-version-702762 kubelet[2107]: E0317 11:14:40.629463    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.448187  317731 logs.go:138] Found kubelet problem: Mar 17 11:14:56 old-k8s-version-702762 kubelet[2107]: E0317 11:14:56.817270    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:38.449529  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:11 old-k8s-version-702762 kubelet[2107]: E0317 11:15:11.542732    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.453443  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:26 old-k8s-version-702762 kubelet[2107]: E0317 11:15:26.806183    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:38.454899  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:38 old-k8s-version-702762 kubelet[2107]: E0317 11:15:38.542452    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.456322  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:51 old-k8s-version-702762 kubelet[2107]: E0317 11:15:51.542450    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.457427  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:06 old-k8s-version-702762 kubelet[2107]: E0317 11:16:06.542562    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.460393  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:20 old-k8s-version-702762 kubelet[2107]: E0317 11:16:20.791224    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:38.462453  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:35 old-k8s-version-702762 kubelet[2107]: E0317 11:16:35.542528    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.463579  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:48 old-k8s-version-702762 kubelet[2107]: E0317 11:16:48.542328    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.464699  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:00 old-k8s-version-702762 kubelet[2107]: E0317 11:17:00.542357    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.465793  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:12 old-k8s-version-702762 kubelet[2107]: E0317 11:17:12.542451    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.466889  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:26 old-k8s-version-702762 kubelet[2107]: E0317 11:17:26.542467    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.467030  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:38 old-k8s-version-702762 kubelet[2107]: E0317 11:17:38.542445    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.470948  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:53 old-k8s-version-702762 kubelet[2107]: E0317 11:17:53.805608    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:38.471104  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:07 old-k8s-version-702762 kubelet[2107]: E0317 11:18:07.542800    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.472270  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:18 old-k8s-version-702762 kubelet[2107]: E0317 11:18:18.542355    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.474442  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:33 old-k8s-version-702762 kubelet[2107]: E0317 11:18:33.542756    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	I0317 11:18:38.474458  317731 logs.go:123] Gathering logs for kube-apiserver [47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41] ...
	I0317 11:18:38.474477  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41"
	I0317 11:18:38.524448  317731 logs.go:123] Gathering logs for etcd [dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f] ...
	I0317 11:18:38.524496  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f"
	I0317 11:18:38.562251  317731 logs.go:123] Gathering logs for containerd ...
	I0317 11:18:38.562281  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:18:38.625000  317731 logs.go:123] Gathering logs for container status ...
	I0317 11:18:38.625038  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:18:38.664116  317731 logs.go:123] Gathering logs for dmesg ...
	I0317 11:18:38.664145  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:18:38.684277  317731 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:18:38.684307  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:18:38.781689  317731 logs.go:123] Gathering logs for kube-scheduler [cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827] ...
	I0317 11:18:38.781722  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827"
	I0317 11:18:38.822476  317731 logs.go:123] Gathering logs for kube-proxy [66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12] ...
	I0317 11:18:38.822519  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12"
	I0317 11:18:38.855846  317731 logs.go:123] Gathering logs for kube-controller-manager [344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72] ...
	I0317 11:18:38.855874  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72"
	I0317 11:18:38.900732  317731 out.go:358] Setting ErrFile to fd 2...
	I0317 11:18:38.900757  317731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0317 11:18:38.900812  317731 out.go:270] X Problems detected in kubelet:
	W0317 11:18:38.900824  317731 out.go:270]   Mar 17 11:17:38 old-k8s-version-702762 kubelet[2107]: E0317 11:17:38.542445    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.900839  317731 out.go:270]   Mar 17 11:17:53 old-k8s-version-702762 kubelet[2107]: E0317 11:17:53.805608    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:38.900848  317731 out.go:270]   Mar 17 11:18:07 old-k8s-version-702762 kubelet[2107]: E0317 11:18:07.542800    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.900854  317731 out.go:270]   Mar 17 11:18:18 old-k8s-version-702762 kubelet[2107]: E0317 11:18:18.542355    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:38.900859  317731 out.go:270]   Mar 17 11:18:33 old-k8s-version-702762 kubelet[2107]: E0317 11:18:33.542756    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	I0317 11:18:38.900865  317731 out.go:358] Setting ErrFile to fd 2...
	I0317 11:18:38.900874  317731 out.go:392] TERM=,COLORTERM=, which probably does not support color
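
Each `Found kubelet problem` line above comes from scanning the last 400 kubelet journal entries for known failure patterns. A rough, illustrative sketch of that scan follows; the regex is a guess at the kind of patterns matched, not minikube's actual table.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Same command the log shows under "Gathering logs for kubelet".
	out, err := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	// Illustrative problem patterns; kubelet journal lines here are long,
	// so give the scanner a larger buffer than its 64 KiB default.
	problem := regexp.MustCompile(`ErrImagePull|ImagePullBackOff|Error syncing pod`)
	sc := bufio.NewScanner(bytes.NewReader(out))
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		if problem.MatchString(sc.Text()) {
			fmt.Println("Found kubelet problem:", sc.Text())
		}
	}
}
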
	I0317 11:18:48.903019  317731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:18:48.914275  317731 api_server.go:72] duration metric: took 4m12.453927752s to wait for apiserver process to appear ...
	I0317 11:18:48.914308  317731 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:18:48.914345  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:18:48.914401  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:18:48.947448  317731 cri.go:89] found id: "47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41"
	I0317 11:18:48.947479  317731 cri.go:89] found id: ""
	I0317 11:18:48.947489  317731 logs.go:282] 1 containers: [47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41]
	I0317 11:18:48.947536  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:48.951455  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:18:48.951523  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:18:48.984116  317731 cri.go:89] found id: "dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f"
	I0317 11:18:48.984137  317731 cri.go:89] found id: ""
	I0317 11:18:48.984144  317731 logs.go:282] 1 containers: [dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f]
	I0317 11:18:48.984185  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:48.987590  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:18:48.987661  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:18:49.021340  317731 cri.go:89] found id: ""
	I0317 11:18:49.021366  317731 logs.go:282] 0 containers: []
	W0317 11:18:49.021374  317731 logs.go:284] No container was found matching "coredns"
	I0317 11:18:49.021379  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:18:49.021424  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:18:49.055649  317731 cri.go:89] found id: "cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827"
	I0317 11:18:49.055676  317731 cri.go:89] found id: ""
	I0317 11:18:49.055688  317731 logs.go:282] 1 containers: [cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827]
	I0317 11:18:49.055752  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:49.059396  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:18:49.059467  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:18:49.094511  317731 cri.go:89] found id: "66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12"
	I0317 11:18:49.094536  317731 cri.go:89] found id: ""
	I0317 11:18:49.094544  317731 logs.go:282] 1 containers: [66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12]
	I0317 11:18:49.094602  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:49.098077  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:18:49.098143  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:18:49.131902  317731 cri.go:89] found id: "344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72"
	I0317 11:18:49.131925  317731 cri.go:89] found id: ""
	I0317 11:18:49.131935  317731 logs.go:282] 1 containers: [344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72]
	I0317 11:18:49.131993  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:49.136226  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:18:49.136297  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:18:49.168814  317731 cri.go:89] found id: ""
	I0317 11:18:49.168840  317731 logs.go:282] 0 containers: []
	W0317 11:18:49.168848  317731 logs.go:284] No container was found matching "kindnet"
	I0317 11:18:49.168861  317731 logs.go:123] Gathering logs for kubelet ...
	I0317 11:18:49.168873  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0317 11:18:49.227360  317731 logs.go:138] Found kubelet problem: Mar 17 11:14:39 old-k8s-version-702762 kubelet[2107]: E0317 11:14:39.932492    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:49.227589  317731 logs.go:138] Found kubelet problem: Mar 17 11:14:40 old-k8s-version-702762 kubelet[2107]: E0317 11:14:40.629463    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.230783  317731 logs.go:138] Found kubelet problem: Mar 17 11:14:56 old-k8s-version-702762 kubelet[2107]: E0317 11:14:56.817270    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:49.231884  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:11 old-k8s-version-702762 kubelet[2107]: E0317 11:15:11.542732    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.235044  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:26 old-k8s-version-702762 kubelet[2107]: E0317 11:15:26.806183    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:49.236171  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:38 old-k8s-version-702762 kubelet[2107]: E0317 11:15:38.542452    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.237241  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:51 old-k8s-version-702762 kubelet[2107]: E0317 11:15:51.542450    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.238307  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:06 old-k8s-version-702762 kubelet[2107]: E0317 11:16:06.542562    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.241199  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:20 old-k8s-version-702762 kubelet[2107]: E0317 11:16:20.791224    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:49.243643  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:35 old-k8s-version-702762 kubelet[2107]: E0317 11:16:35.542528    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.244730  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:48 old-k8s-version-702762 kubelet[2107]: E0317 11:16:48.542328    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.245802  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:00 old-k8s-version-702762 kubelet[2107]: E0317 11:17:00.542357    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.246869  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:12 old-k8s-version-702762 kubelet[2107]: E0317 11:17:12.542451    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.248026  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:26 old-k8s-version-702762 kubelet[2107]: E0317 11:17:26.542467    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.248185  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:38 old-k8s-version-702762 kubelet[2107]: E0317 11:17:38.542445    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.252061  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:53 old-k8s-version-702762 kubelet[2107]: E0317 11:17:53.805608    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:49.252191  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:07 old-k8s-version-702762 kubelet[2107]: E0317 11:18:07.542800    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.253256  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:18 old-k8s-version-702762 kubelet[2107]: E0317 11:18:18.542355    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.255319  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:33 old-k8s-version-702762 kubelet[2107]: E0317 11:18:33.542756    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.255446  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:45 old-k8s-version-702762 kubelet[2107]: E0317 11:18:45.545201    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	I0317 11:18:49.256402  317731 logs.go:123] Gathering logs for kube-apiserver [47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41] ...
	I0317 11:18:49.256421  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41"
	I0317 11:18:49.301899  317731 logs.go:123] Gathering logs for etcd [dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f] ...
	I0317 11:18:49.301930  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f"
	I0317 11:18:49.339552  317731 logs.go:123] Gathering logs for dmesg ...
	I0317 11:18:49.339588  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:18:49.360831  317731 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:18:49.360871  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:18:49.456578  317731 logs.go:123] Gathering logs for kube-scheduler [cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827] ...
	I0317 11:18:49.456607  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827"
	I0317 11:18:49.493964  317731 logs.go:123] Gathering logs for kube-proxy [66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12] ...
	I0317 11:18:49.493998  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12"
	I0317 11:18:49.528805  317731 logs.go:123] Gathering logs for kube-controller-manager [344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72] ...
	I0317 11:18:49.528852  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72"
	I0317 11:18:49.572883  317731 logs.go:123] Gathering logs for containerd ...
	I0317 11:18:49.572914  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:18:49.629760  317731 logs.go:123] Gathering logs for container status ...
	I0317 11:18:49.629792  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:18:49.666624  317731 out.go:358] Setting ErrFile to fd 2...
	I0317 11:18:49.666646  317731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0317 11:18:49.666703  317731 out.go:270] X Problems detected in kubelet:
	W0317 11:18:49.666713  317731 out.go:270]   Mar 17 11:17:53 old-k8s-version-702762 kubelet[2107]: E0317 11:17:53.805608    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:18:49.666724  317731 out.go:270]   Mar 17 11:18:07 old-k8s-version-702762 kubelet[2107]: E0317 11:18:07.542800    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.666736  317731 out.go:270]   Mar 17 11:18:18 old-k8s-version-702762 kubelet[2107]: E0317 11:18:18.542355    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.666744  317731 out.go:270]   Mar 17 11:18:33 old-k8s-version-702762 kubelet[2107]: E0317 11:18:33.542756    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:18:49.666752  317731 out.go:270]   Mar 17 11:18:45 old-k8s-version-702762 kubelet[2107]: E0317 11:18:45.545201    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	I0317 11:18:49.666759  317731 out.go:358] Setting ErrFile to fd 2...
	I0317 11:18:49.666767  317731 out.go:392] TERM=,COLORTERM=, which probably does not support color
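
Every problem flagged above has the same root cause: unauthenticated pulls of docker.io/kindest/kindnetd are rejected by Docker Hub with 429 toomanyrequests, so kindnet-cni never starts, the CNI never comes up, and coredns never becomes Ready. One plausible mitigation sketch, not part of this test run: pull the image once on the host (ideally after `docker login`, which raises the rate limit) and side-load it into the node so kubelet never needs to reach the registry.

package main

import (
	"fmt"
	"os/exec"
)

// Hypothetical mitigation: host-side pull, then side-load into the
// profile's containerd store via `minikube image load`.
func main() {
	img := "docker.io/kindest/kindnetd:v20250214-acbabc1a"
	steps := [][]string{
		{"docker", "pull", img},
		{"minikube", "-p", "old-k8s-version-702762", "image", "load", img},
	}
	for _, args := range steps {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", args, err, out)
			return
		}
	}
	fmt.Println("kindnetd side-loaded; kindnet-cni can start without pulling")
}
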
	I0317 11:18:59.667582  317731 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0317 11:18:59.674185  317731 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0317 11:18:59.675018  317731 api_server.go:141] control plane version: v1.20.0
	I0317 11:18:59.675045  317731 api_server.go:131] duration metric: took 10.760730893s to wait for apiserver health ...
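
The healthz wait that just completed is a plain HTTPS GET against the apiserver endpoint shown above. A minimal sketch of that probe follows; it skips certificate verification for brevity, whereas the real client would trust the cluster CA from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Brevity only: production code should load the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.94.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
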
	I0317 11:18:59.675054  317731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:18:59.675071  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:18:59.675114  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:18:59.709384  317731 cri.go:89] found id: "47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41"
	I0317 11:18:59.709410  317731 cri.go:89] found id: ""
	I0317 11:18:59.709421  317731 logs.go:282] 1 containers: [47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41]
	I0317 11:18:59.709472  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:59.713229  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:18:59.713311  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:18:59.748289  317731 cri.go:89] found id: "dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f"
	I0317 11:18:59.748310  317731 cri.go:89] found id: ""
	I0317 11:18:59.748317  317731 logs.go:282] 1 containers: [dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f]
	I0317 11:18:59.748369  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:59.752004  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:18:59.752080  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:18:59.783350  317731 cri.go:89] found id: ""
	I0317 11:18:59.783378  317731 logs.go:282] 0 containers: []
	W0317 11:18:59.783388  317731 logs.go:284] No container was found matching "coredns"
	I0317 11:18:59.783394  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:18:59.783438  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:18:59.814771  317731 cri.go:89] found id: "cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827"
	I0317 11:18:59.814796  317731 cri.go:89] found id: ""
	I0317 11:18:59.814803  317731 logs.go:282] 1 containers: [cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827]
	I0317 11:18:59.814861  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:59.818257  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:18:59.818308  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:18:59.850163  317731 cri.go:89] found id: "66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12"
	I0317 11:18:59.850182  317731 cri.go:89] found id: ""
	I0317 11:18:59.850189  317731 logs.go:282] 1 containers: [66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12]
	I0317 11:18:59.850228  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:59.853743  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:18:59.853825  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:18:59.888756  317731 cri.go:89] found id: "344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72"
	I0317 11:18:59.888780  317731 cri.go:89] found id: ""
	I0317 11:18:59.888789  317731 logs.go:282] 1 containers: [344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72]
	I0317 11:18:59.888856  317731 ssh_runner.go:195] Run: which crictl
	I0317 11:18:59.892227  317731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:18:59.892280  317731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:18:59.925109  317731 cri.go:89] found id: ""
	I0317 11:18:59.925135  317731 logs.go:282] 0 containers: []
	W0317 11:18:59.925143  317731 logs.go:284] No container was found matching "kindnet"
	I0317 11:18:59.925159  317731 logs.go:123] Gathering logs for etcd [dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f] ...
	I0317 11:18:59.925172  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f"
	I0317 11:18:59.962181  317731 logs.go:123] Gathering logs for kube-scheduler [cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827] ...
	I0317 11:18:59.962208  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827"
	I0317 11:18:59.997544  317731 logs.go:123] Gathering logs for kubelet ...
	I0317 11:18:59.997574  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0317 11:19:00.062731  317731 logs.go:138] Found kubelet problem: Mar 17 11:14:39 old-k8s-version-702762 kubelet[2107]: E0317 11:14:39.932492    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:19:00.062879  317731 logs.go:138] Found kubelet problem: Mar 17 11:14:40 old-k8s-version-702762 kubelet[2107]: E0317 11:14:40.629463    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.065914  317731 logs.go:138] Found kubelet problem: Mar 17 11:14:56 old-k8s-version-702762 kubelet[2107]: E0317 11:14:56.817270    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:19:00.067033  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:11 old-k8s-version-702762 kubelet[2107]: E0317 11:15:11.542732    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.070164  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:26 old-k8s-version-702762 kubelet[2107]: E0317 11:15:26.806183    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:19:00.071335  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:38 old-k8s-version-702762 kubelet[2107]: E0317 11:15:38.542452    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.072444  317731 logs.go:138] Found kubelet problem: Mar 17 11:15:51 old-k8s-version-702762 kubelet[2107]: E0317 11:15:51.542450    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.073531  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:06 old-k8s-version-702762 kubelet[2107]: E0317 11:16:06.542562    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.076538  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:20 old-k8s-version-702762 kubelet[2107]: E0317 11:16:20.791224    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:19:00.078629  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:35 old-k8s-version-702762 kubelet[2107]: E0317 11:16:35.542528    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.079816  317731 logs.go:138] Found kubelet problem: Mar 17 11:16:48 old-k8s-version-702762 kubelet[2107]: E0317 11:16:48.542328    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.080911  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:00 old-k8s-version-702762 kubelet[2107]: E0317 11:17:00.542357    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.081993  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:12 old-k8s-version-702762 kubelet[2107]: E0317 11:17:12.542451    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.083077  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:26 old-k8s-version-702762 kubelet[2107]: E0317 11:17:26.542467    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.083203  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:38 old-k8s-version-702762 kubelet[2107]: E0317 11:17:38.542445    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.087130  317731 logs.go:138] Found kubelet problem: Mar 17 11:17:53 old-k8s-version-702762 kubelet[2107]: E0317 11:17:53.805608    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	W0317 11:19:00.087273  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:07 old-k8s-version-702762 kubelet[2107]: E0317 11:18:07.542800    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.088399  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:18 old-k8s-version-702762 kubelet[2107]: E0317 11:18:18.542355    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.090509  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:33 old-k8s-version-702762 kubelet[2107]: E0317 11:18:33.542756    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.090655  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:45 old-k8s-version-702762 kubelet[2107]: E0317 11:18:45.545201    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.091942  317731 logs.go:138] Found kubelet problem: Mar 17 11:18:58 old-k8s-version-702762 kubelet[2107]: E0317 11:18:58.542413    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
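	Note: every kubelet problem gathered above has the same root cause: anonymous pulls of docker.io/kindest/kindnetd:v20250214-acbabc1a are hitting Docker Hub's unauthenticated pull rate limit (HTTP 429, toomanyrequests). Docker documents a probe for checking remaining anonymous quota; the Go sketch below is illustrative only and is not part of minikube (the token endpoint, the ratelimitpreview/test repository, and the ratelimit-* headers are Docker's published mechanism; error handling is deliberately thin).

	// ratelimit_check.go — a minimal sketch of Docker Hub's documented
	// anonymous rate-limit probe. Illustrative only; not minikube code.
	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		// 1. Fetch an anonymous pull token scoped to the rate-limit preview repo.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		// 2. HEAD the manifest; per Docker's docs a HEAD request should not
		// count against the pull limit.
		req, _ := http.NewRequest(http.MethodHead,
			"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		req.Header.Set("Authorization", "Bearer "+tok.Token)

		head, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer head.Body.Close()

		// Header values look like "100;w=21600" (100 pulls per 6-hour window).
		fmt.Println("ratelimit-limit:    ", head.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", head.Header.Get("ratelimit-remaining"))
	}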
	I0317 11:19:00.093180  317731 logs.go:123] Gathering logs for kube-apiserver [47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41] ...
	I0317 11:19:00.093208  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41"
	I0317 11:19:00.139942  317731 logs.go:123] Gathering logs for kube-proxy [66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12] ...
	I0317 11:19:00.139971  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12"
	I0317 11:19:00.174597  317731 logs.go:123] Gathering logs for kube-controller-manager [344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72] ...
	I0317 11:19:00.174624  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72"
	I0317 11:19:00.218881  317731 logs.go:123] Gathering logs for containerd ...
	I0317 11:19:00.218913  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:19:00.278691  317731 logs.go:123] Gathering logs for container status ...
	I0317 11:19:00.278735  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:19:00.313594  317731 logs.go:123] Gathering logs for dmesg ...
	I0317 11:19:00.313621  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:19:00.333680  317731 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:19:00.333707  317731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:19:00.424432  317731 out.go:358] Setting ErrFile to fd 2...
	I0317 11:19:00.424457  317731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0317 11:19:00.424504  317731 out.go:270] X Problems detected in kubelet:
	W0317 11:19:00.424515  317731 out.go:270]   Mar 17 11:18:07 old-k8s-version-702762 kubelet[2107]: E0317 11:18:07.542800    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.424521  317731 out.go:270]   Mar 17 11:18:18 old-k8s-version-702762 kubelet[2107]: E0317 11:18:18.542355    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.424533  317731 out.go:270]   Mar 17 11:18:33 old-k8s-version-702762 kubelet[2107]: E0317 11:18:33.542756    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.424539  317731 out.go:270]   Mar 17 11:18:45 old-k8s-version-702762 kubelet[2107]: E0317 11:18:45.545201    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	W0317 11:19:00.424546  317731 out.go:270]   Mar 17 11:18:58 old-k8s-version-702762 kubelet[2107]: E0317 11:18:58.542413    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	I0317 11:19:00.424551  317731 out.go:358] Setting ErrFile to fd 2...
	I0317 11:19:00.424556  317731 out.go:392] TERM=,COLORTERM=, which probably does not support color
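	Note: since the failure is registry-side rather than cluster-side, a common mitigation is to authenticate pulls or to side-load the image so kubelet never contacts Docker Hub at all. Below is a hedged sketch of the side-load workflow; it assumes docker and minikube are on PATH, takes the profile name from this log, and uses docker pull followed by minikube image load as one possible workflow, not the only one.

	// preload.go — a sketch of side-loading the kindnet image into the node
	// so kubelet never has to pull it from Docker Hub. Assumes docker and
	// minikube are on PATH; the profile name is the one from this log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", name, args, err)
			os.Exit(1)
		}
	}

	func main() {
		img := "docker.io/kindest/kindnetd:v20250214-acbabc1a"
		// Pull once on the host (authenticated via `docker login` if needed)...
		run("docker", "pull", img)
		// ...then copy it into the cluster node's containerd image store.
		run("minikube", "-p", "old-k8s-version-702762", "image", "load", img)
	}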
	I0317 11:19:10.428549  317731 system_pods.go:59] 8 kube-system pods found
	I0317 11:19:10.428599  317731 system_pods.go:61] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:10.428611  317731 system_pods.go:61] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:10.428625  317731 system_pods.go:61] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:10.428635  317731 system_pods.go:61] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:10.428640  317731 system_pods.go:61] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:10.428645  317731 system_pods.go:61] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:10.428649  317731 system_pods.go:61] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:10.428654  317731 system_pods.go:61] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:10.428661  317731 system_pods.go:74] duration metric: took 10.753600992s to wait for pod list to return data ...
	I0317 11:19:10.428671  317731 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:19:10.430301  317731 default_sa.go:45] found service account: "default"
	I0317 11:19:10.430318  317731 default_sa.go:55] duration metric: took 1.640394ms for default service account to be created ...
	I0317 11:19:10.430325  317731 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:19:10.433051  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:10.433079  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:10.433086  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:10.433094  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:10.433098  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:10.433105  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:10.433111  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:10.433115  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:10.433122  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:10.433148  317731 retry.go:31] will retry after 238.5328ms: missing components: kube-dns
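	Note: the retry.go intervals that follow grow roughly exponentially with jitter, capped well below the overall 6-minute node wait. A minimal, self-contained sketch of that backoff shape (illustrative only; this is not minikube's actual retry implementation):

	// backoff.go — a sketch of capped exponential backoff with jitter, the
	// schedule visible in the retry.go lines of this log. Illustrative only.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// next returns a jittered delay for the given attempt, doubling from
	// base and clamped at max. Jitter spreads delays over [0.5x, 1.5x).
	func next(attempt int, base, max time.Duration) time.Duration {
		d := base << attempt // base * 2^attempt
		if d > max || d <= 0 {
			d = max // clamp, including on shift overflow
		}
		jitter := 0.5 + rand.Float64()
		return time.Duration(float64(d) * jitter)
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m node wait
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			if check() {
				fmt.Println("components ready")
				return
			}
			d := next(attempt, 250*time.Millisecond, 30*time.Second)
			fmt.Printf("will retry after %v: missing components: kube-dns\n", d)
			time.Sleep(d)
		}
		fmt.Println("timed out waiting for kube-dns")
	}

	// check stands in for the pod listing done by system_pods.go.
	func check() bool { return false }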
	I0317 11:19:10.675430  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:10.675469  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:10.675475  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:10.675482  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:10.675487  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:10.675491  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:10.675494  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:10.675497  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:10.675500  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:10.675514  317731 retry.go:31] will retry after 321.920092ms: missing components: kube-dns
	I0317 11:19:11.001482  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:11.001516  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:11.001523  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:11.001531  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:11.001535  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:11.001541  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:11.001556  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:11.001560  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:11.001564  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:11.001577  317731 retry.go:31] will retry after 356.257687ms: missing components: kube-dns
	I0317 11:19:11.361342  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:11.361371  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:11.361376  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:11.361383  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:11.361387  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:11.361392  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:11.361395  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:11.361399  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:11.361402  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:11.361415  317731 retry.go:31] will retry after 436.697253ms: missing components: kube-dns
	I0317 11:19:11.802176  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:11.802207  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:11.802212  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:11.802221  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:11.802225  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:11.802230  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:11.802233  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:11.802236  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:11.802239  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:11.802251  317731 retry.go:31] will retry after 603.87462ms: missing components: kube-dns
	I0317 11:19:12.410084  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:12.410114  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:12.410119  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:12.410126  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:12.410130  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:12.410143  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:12.410149  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:12.410157  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:12.410162  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:12.410182  317731 retry.go:31] will retry after 857.575048ms: missing components: kube-dns
	I0317 11:19:13.272352  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:13.272386  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:13.272393  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:13.272402  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:13.272408  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:13.272413  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:13.272418  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:13.272423  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:13.272428  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:13.272444  317731 retry.go:31] will retry after 752.669478ms: missing components: kube-dns
	I0317 11:19:14.029026  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:14.029068  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:14.029073  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:14.029081  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:14.029085  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:14.029089  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:14.029093  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:14.029096  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:14.029099  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:14.029114  317731 retry.go:31] will retry after 947.711725ms: missing components: kube-dns
	I0317 11:19:14.981556  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:14.981586  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:14.981591  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:14.981599  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:14.981604  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:14.981610  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:14.981615  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:14.981620  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:14.981627  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:14.981645  317731 retry.go:31] will retry after 1.299336931s: missing components: kube-dns
	I0317 11:19:16.284981  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:16.285013  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:16.285019  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:16.285027  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:16.285032  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:16.285036  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:16.285052  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:16.285058  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:16.285061  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:16.285076  317731 retry.go:31] will retry after 1.742562011s: missing components: kube-dns
	I0317 11:19:18.032359  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:18.032403  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:18.032411  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:18.032424  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:18.032431  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:18.032443  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:18.032463  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:18.032473  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:18.032483  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:18.032506  317731 retry.go:31] will retry after 1.760129772s: missing components: kube-dns
	I0317 11:19:19.798021  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:19.798052  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:19.798057  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:19.798072  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:19.798076  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:19.798082  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:19.798088  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:19.798093  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:19.798098  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:19.798117  317731 retry.go:31] will retry after 3.347585386s: missing components: kube-dns
	I0317 11:19:23.150961  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:23.151038  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:23.151049  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:23.151057  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:23.151064  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:23.151069  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:23.151075  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:23.151079  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:23.151084  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:23.151097  317731 retry.go:31] will retry after 3.293571236s: missing components: kube-dns
	I0317 11:19:26.447999  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:26.448029  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:26.448035  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:26.448043  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:26.448048  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:26.448052  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:26.448056  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:26.448059  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:26.448062  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:26.448077  317731 retry.go:31] will retry after 4.145747173s: missing components: kube-dns
	I0317 11:19:30.598595  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:30.598627  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:30.598633  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:30.598640  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:30.598644  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:30.598650  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:30.598655  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:30.598660  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:30.598667  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:30.598688  317731 retry.go:31] will retry after 4.576796041s: missing components: kube-dns
	I0317 11:19:35.182896  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:35.182928  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:35.182934  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:35.182941  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:35.182946  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:35.182954  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:35.182959  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:35.182965  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:35.182972  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:35.183002  317731 retry.go:31] will retry after 7.687040815s: missing components: kube-dns
	I0317 11:19:42.875646  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:42.875679  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:42.875684  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:42.875693  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:42.875697  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:42.875702  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:42.875705  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:42.875708  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:42.875711  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:42.875723  317731 retry.go:31] will retry after 7.578260506s: missing components: kube-dns
	I0317 11:19:50.459583  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:50.459625  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:50.459630  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:19:50.459640  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:50.459648  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:19:50.459655  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:19:50.459660  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:19:50.459665  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:19:50.459672  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:19:50.459704  317731 retry.go:31] will retry after 13.764631873s: missing components: kube-dns
	I0317 11:20:04.231864  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:04.231904  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:04.231912  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:04.231925  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:04.231932  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:04.231939  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:04.231959  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:04.231968  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:04.231974  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:04.231992  317731 retry.go:31] will retry after 11.514658697s: missing components: kube-dns
	I0317 11:20:15.751647  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:15.751680  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:15.751688  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:15.751699  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:15.751704  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:15.751711  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:15.751716  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:15.751722  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:15.751728  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:15.751750  317731 retry.go:31] will retry after 15.481083164s: missing components: kube-dns
	I0317 11:20:31.236896  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:31.236935  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:31.236944  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:31.236959  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:31.236964  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:31.236971  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:31.236976  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:31.236984  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:31.236990  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:31.237009  317731 retry.go:31] will retry after 19.261545466s: missing components: kube-dns
	I0317 11:20:50.502591  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:50.502629  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:50.502636  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:50.502647  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:50.502652  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:50.502658  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:50.502664  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:50.502670  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:50.502676  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:50.502696  317731 retry.go:31] will retry after 27.654906766s: missing components: kube-dns
	I0317 11:21:18.162882  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:18.162924  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:18.162931  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:21:18.162943  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:18.162950  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:21:18.162957  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:21:18.162963  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:21:18.162969  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:21:18.162978  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:21:18.162995  317731 retry.go:31] will retry after 25.805377541s: missing components: kube-dns
	I0317 11:21:43.975001  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:43.975039  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:43.975046  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:21:43.975057  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:43.975063  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:21:43.975070  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:21:43.975075  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:21:43.975082  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:21:43.975087  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:21:43.975105  317731 retry.go:31] will retry after 50.299309092s: missing components: kube-dns
	I0317 11:22:34.281779  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:34.281815  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:34.281822  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:22:34.281830  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:34.281834  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:22:34.281840  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:22:34.281844  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:22:34.281848  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:22:34.281851  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:22:34.281866  317731 retry.go:31] will retry after 1m2.657088736s: missing components: kube-dns
	I0317 11:23:36.943443  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:23:36.943481  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:36.943487  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:23:36.943497  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:23:36.943503  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:23:36.943509  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:23:36.943512  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:23:36.943516  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:23:36.943520  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:23:36.943538  317731 retry.go:31] will retry after 53.125754107s: missing components: kube-dns
	I0317 11:24:30.074980  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:30.075015  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:30.075021  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:24:30.075028  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:30.075032  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:24:30.075036  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:24:30.075040  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:24:30.075046  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:24:30.075049  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:24:30.077099  317731 out.go:201] 
	W0317 11:24:30.078365  317731 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:24:30.078387  317731 out.go:270] * 
	W0317 11:24:30.079214  317731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:24:30.080684  317731 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-702762 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
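The failure mode in the log above is coredns (the kube-dns component) never leaving Pending / ContainersNotReady within the 6m0s node wait. For a live reproduction, a minimal diagnosis sketch, assuming the kubeconfig context matches the profile name and that coredns carries the standard k8s-app=kube-dns label (these one-off commands are not part of the test suite):

	# Where is the DNS pod scheduled, and what do its events and logs say?
	kubectl --context old-k8s-version-702762 -n kube-system get pods -o wide
	kubectl --context old-k8s-version-702762 -n kube-system describe pod -l k8s-app=kube-dns
	kubectl --context old-k8s-version-702762 -n kube-system logs -l k8s-app=kube-dns --all-containers

Since kindnet-cni is also stuck ContainersNotReady in the same listings, the CNI pod is the likelier root cause: coredns cannot go Ready until the pod network is up.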
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-702762
helpers_test.go:235: (dbg) docker inspect old-k8s-version-702762:

-- stdout --
	[
	    {
	        "Id": "f032450d7b40a77ac950e8be0443a18a7fcbd051351121157ab2db0b9fc6d877",
	        "Created": "2025-03-17T11:13:51.516716811Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 318285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-17T11:13:51.547149472Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/f032450d7b40a77ac950e8be0443a18a7fcbd051351121157ab2db0b9fc6d877/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f032450d7b40a77ac950e8be0443a18a7fcbd051351121157ab2db0b9fc6d877/hostname",
	        "HostsPath": "/var/lib/docker/containers/f032450d7b40a77ac950e8be0443a18a7fcbd051351121157ab2db0b9fc6d877/hosts",
	        "LogPath": "/var/lib/docker/containers/f032450d7b40a77ac950e8be0443a18a7fcbd051351121157ab2db0b9fc6d877/f032450d7b40a77ac950e8be0443a18a7fcbd051351121157ab2db0b9fc6d877-json.log",
	        "Name": "/old-k8s-version-702762",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-702762:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-702762",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f032450d7b40a77ac950e8be0443a18a7fcbd051351121157ab2db0b9fc6d877",
	                "LowerDir": "/var/lib/docker/overlay2/30d6ff3d8dd349eb0ddb99703d86296cb2fd0bbcd04baa1853731b1f0107749b-init/diff:/var/lib/docker/overlay2/c513cb32e4b42c4b2e1258d7197e5cd39dcbb3306943490e9747416948e6aaf6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30d6ff3d8dd349eb0ddb99703d86296cb2fd0bbcd04baa1853731b1f0107749b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30d6ff3d8dd349eb0ddb99703d86296cb2fd0bbcd04baa1853731b1f0107749b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30d6ff3d8dd349eb0ddb99703d86296cb2fd0bbcd04baa1853731b1f0107749b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-702762",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-702762/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-702762",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-702762",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-702762",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d63cc1221922c2f67ed521e2e78ed2e8e3e5d386a26876023a7c5f6bb4604d6",
	            "SandboxKey": "/var/run/docker/netns/6d63cc122192",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-702762": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:bd:54:49:f8:b7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ea0054525d5e0ff5046ee8111ec9a938cf34ac683fd20b7b6a3476707aac0dc8",
	                    "EndpointID": "412303d7406ffc20e6ec06d31cea579c83e2d33b2bff4ecdc20ae1090eedb7c9",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-702762",
	                        "f032450d7b40"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
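The full inspect dump above can be narrowed to the fields that matter for this post-mortem (container state, published host ports, static IP) with docker's Go-template --format flag; a sketch against the same container name:

	# Container state only
	docker inspect -f '{{.State.Status}}' old-k8s-version-702762
	# Host port bound for each exposed container port
	docker inspect -f '{{range $port, $bindings := .NetworkSettings.Ports}}{{$port}} -> {{(index $bindings 0).HostPort}}{{"\n"}}{{end}}' old-k8s-version-702762
	# Static IP on the profile network
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-702762").IPAddress}}' old-k8s-version-702762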
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-702762 -n old-k8s-version-702762
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-702762 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-702762 logs -n 25: (1.032916571s)
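The harness only captures the last 25 log lines below; for a complete capture the --file flag from the advice box writes the full log set to disk using the same binary and profile:

	out/minikube-linux-amd64 -p old-k8s-version-702762 logs --file=logs.txt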
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:19 UTC | 17 Mar 25 11:19 UTC |
	|         | systemctl status kubelet --all                       |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:19 UTC | 17 Mar 25 11:19 UTC |
	|         | systemctl cat kubelet                                |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:19 UTC | 17 Mar 25 11:19 UTC |
	|         | journalctl -xeu kubelet --all                        |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | systemctl status docker --all                        |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | systemctl cat docker                                 |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | /etc/docker/daemon.json                              |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo docker                        | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | system info                                          |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | systemctl status cri-docker                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | systemctl cat cri-docker                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | cri-dockerd --version                                |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | systemctl status containerd                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | systemctl cat containerd                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /lib/systemd/system/containerd.service               |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /etc/containerd/config.toml                          |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | containerd config dump                               |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | systemctl status crio --all                          |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | systemctl cat crio --no-pager                        |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo find                          | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo crio                          | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| delete  | -p kindnet-236437                                    | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	| start   | -p                                                   | default-k8s-diff-port-627203 | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | default-k8s-diff-port-627203                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                       |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 11:20:09
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 11:20:09.951775  341496 out.go:345] Setting OutFile to fd 1 ...
	I0317 11:20:09.951911  341496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:20:09.951918  341496 out.go:358] Setting ErrFile to fd 2...
	I0317 11:20:09.951924  341496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:20:09.952147  341496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 11:20:09.952741  341496 out.go:352] Setting JSON to false
	I0317 11:20:09.954025  341496 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3703,"bootTime":1742206707,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 11:20:09.954091  341496 start.go:139] virtualization: kvm guest
	I0317 11:20:09.956439  341496 out.go:177] * [default-k8s-diff-port-627203] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 11:20:09.957897  341496 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:20:09.957990  341496 notify.go:220] Checking for updates...
	I0317 11:20:09.960721  341496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:20:09.962333  341496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:20:09.963810  341496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 11:20:09.965290  341496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 11:20:09.966759  341496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:20:09.968637  341496 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:09.968800  341496 config.go:182] Loaded profile config "no-preload-189670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:09.968922  341496 config.go:182] Loaded profile config "old-k8s-version-702762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0317 11:20:09.969134  341496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:20:09.994726  341496 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 11:20:09.994957  341496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:20:10.047464  341496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:20:10.037717036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:20:10.047559  341496 docker.go:318] overlay module found
	I0317 11:20:10.049461  341496 out.go:177] * Using the docker driver based on user configuration
	I0317 11:20:10.050764  341496 start.go:297] selected driver: docker
	I0317 11:20:10.050780  341496 start.go:901] validating driver "docker" against <nil>
	I0317 11:20:10.050795  341496 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:20:10.051718  341496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:20:10.105955  341496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:20:10.096342154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:20:10.106128  341496 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:20:10.106353  341496 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:20:10.108473  341496 out.go:177] * Using Docker driver with root privileges
	I0317 11:20:10.109937  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:10.110100  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:10.110117  341496 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 11:20:10.110220  341496 start.go:340] cluster config:
	{Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:20:10.111829  341496 out.go:177] * Starting "default-k8s-diff-port-627203" primary control-plane node in "default-k8s-diff-port-627203" cluster
	I0317 11:20:10.113031  341496 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 11:20:10.114478  341496 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 11:20:10.115992  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:10.116043  341496 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 11:20:10.116053  341496 cache.go:56] Caching tarball of preloaded images
	I0317 11:20:10.116120  341496 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 11:20:10.116149  341496 preload.go:172] Found /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:20:10.116162  341496 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 11:20:10.116325  341496 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json ...
	I0317 11:20:10.116351  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json: {Name:mk848192ef1b40ae1077b4c3a36047479a0034b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:10.138687  341496 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 11:20:10.138707  341496 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 11:20:10.138729  341496 cache.go:230] Successfully downloaded all kic artifacts
	I0317 11:20:10.138768  341496 start.go:360] acquireMachinesLock for default-k8s-diff-port-627203: {Name:mkcbff1d84866f612a979fbe06c726407300b170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:20:10.138896  341496 start.go:364] duration metric: took 104.168µs to acquireMachinesLock for "default-k8s-diff-port-627203"
	I0317 11:20:10.138925  341496 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:20:10.139000  341496 start.go:125] createHost starting for "" (driver="docker")
	I0317 11:20:10.141230  341496 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0317 11:20:10.141482  341496 start.go:159] libmachine.API.Create for "default-k8s-diff-port-627203" (driver="docker")
	I0317 11:20:10.141513  341496 client.go:168] LocalClient.Create starting
	I0317 11:20:10.141581  341496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem
	I0317 11:20:10.141611  341496 main.go:141] libmachine: Decoding PEM data...
	I0317 11:20:10.141625  341496 main.go:141] libmachine: Parsing certificate...
	I0317 11:20:10.141678  341496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem
	I0317 11:20:10.141696  341496 main.go:141] libmachine: Decoding PEM data...
	I0317 11:20:10.141706  341496 main.go:141] libmachine: Parsing certificate...
	I0317 11:20:10.142029  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 11:20:10.160384  341496 cli_runner.go:211] docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 11:20:10.160474  341496 network_create.go:284] running [docker network inspect default-k8s-diff-port-627203] to gather additional debugging logs...
	I0317 11:20:10.160501  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203
	W0317 11:20:10.178195  341496 cli_runner.go:211] docker network inspect default-k8s-diff-port-627203 returned with exit code 1
	I0317 11:20:10.178227  341496 network_create.go:287] error running [docker network inspect default-k8s-diff-port-627203]: docker network inspect default-k8s-diff-port-627203: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-627203 not found
	I0317 11:20:10.178241  341496 network_create.go:289] output of [docker network inspect default-k8s-diff-port-627203]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-627203 not found
	
	** /stderr **
	I0317 11:20:10.178338  341496 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:20:10.197679  341496 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
	I0317 11:20:10.198624  341496 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-00bf62ef0133 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:c5:34:86:d6:21} reservation:<nil>}
	I0317 11:20:10.199639  341496 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-81e0001ceae7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:6a:cf:1c:79:e6} reservation:<nil>}
	I0317 11:20:10.200718  341496 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d24500}
	I0317 11:20:10.200739  341496 network_create.go:124] attempt to create docker network default-k8s-diff-port-627203 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0317 11:20:10.200784  341496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 default-k8s-diff-port-627203
	I0317 11:20:10.255439  341496 network_create.go:108] docker network default-k8s-diff-port-627203 192.168.76.0/24 created
	I0317 11:20:10.255568  341496 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-627203" container
	I0317 11:20:10.255629  341496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 11:20:10.274724  341496 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-627203 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --label created_by.minikube.sigs.k8s.io=true
	I0317 11:20:10.294680  341496 oci.go:103] Successfully created a docker volume default-k8s-diff-port-627203
	I0317 11:20:10.294772  341496 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-627203-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --entrypoint /usr/bin/test -v default-k8s-diff-port-627203:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 11:20:10.747828  341496 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-627203
	I0317 11:20:10.747877  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:10.747900  341496 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 11:20:10.747969  341496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-627203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 11:20:14.847118  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:14.847156  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:14.847163  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:14.847172  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:14.847176  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:14.847181  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:14.847184  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:14.847187  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:14.847194  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:14.847208  326404 retry.go:31] will retry after 10.791921859s: missing components: kube-dns
	I0317 11:20:15.344266  341496 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-627203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.596232772s)
	I0317 11:20:15.344302  341496 kic.go:203] duration metric: took 4.596396796s to extract preloaded images to volume ...
	W0317 11:20:15.344459  341496 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 11:20:15.344607  341496 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 11:20:15.397506  341496 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-627203 --name default-k8s-diff-port-627203 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --network default-k8s-diff-port-627203 --ip 192.168.76.2 --volume default-k8s-diff-port-627203:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 11:20:15.665923  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Running}}
	I0317 11:20:15.686899  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:15.706866  341496 cli_runner.go:164] Run: docker exec default-k8s-diff-port-627203 stat /var/lib/dpkg/alternatives/iptables
	I0317 11:20:15.749402  341496 oci.go:144] the created container "default-k8s-diff-port-627203" has a running status.
	I0317 11:20:15.749447  341496 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa...
	I0317 11:20:15.892302  341496 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 11:20:15.918468  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:15.941520  341496 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 11:20:15.941545  341496 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-627203 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 11:20:15.989310  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:16.010066  341496 machine.go:93] provisionDockerMachine start ...
	I0317 11:20:16.010194  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:16.033285  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:16.033637  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:16.033665  341496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:20:16.034656  341496 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46524->127.0.0.1:33103: read: connection reset by peer
	I0317 11:20:19.170824  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-627203
	
	I0317 11:20:19.170859  341496 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-627203"
	I0317 11:20:19.170929  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.189150  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:19.189434  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:19.189452  341496 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-627203 && echo "default-k8s-diff-port-627203" | sudo tee /etc/hostname
	I0317 11:20:19.334316  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-627203
	
	I0317 11:20:19.334392  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.351482  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:19.351684  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:19.351701  341496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-627203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-627203/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-627203' | sudo tee -a /etc/hosts; 
				fi
			fi
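
For context, the script above is idempotent: it rewrites the 127.0.1.1 alias only when the machine name is not already present somewhere in /etc/hosts. A quick way to confirm the result from the host (a hedged example; getent consults the files backend of NSS, which includes /etc/hosts):

    # Check that the hostname alias resolves inside the container:
    docker exec default-k8s-diff-port-627203 getent hosts default-k8s-diff-port-627203
    # Expected (given the sed above): 127.0.1.1   default-k8s-diff-port-627203
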
	I0317 11:20:19.483211  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:20:19.483289  341496 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20535-4918/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-4918/.minikube}
	I0317 11:20:19.483331  341496 ubuntu.go:177] setting up certificates
	I0317 11:20:19.483341  341496 provision.go:84] configureAuth start
	I0317 11:20:19.483396  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:19.500645  341496 provision.go:143] copyHostCerts
	I0317 11:20:19.500703  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem, removing ...
	I0317 11:20:19.500713  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem
	I0317 11:20:19.500773  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem (1082 bytes)
	I0317 11:20:19.500859  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem, removing ...
	I0317 11:20:19.500868  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem
	I0317 11:20:19.500892  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem (1123 bytes)
	I0317 11:20:19.500946  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem, removing ...
	I0317 11:20:19.500954  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem
	I0317 11:20:19.500979  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem (1679 bytes)
	I0317 11:20:19.501029  341496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-627203 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-627203 localhost minikube]
	I0317 11:20:19.577076  341496 provision.go:177] copyRemoteCerts
	I0317 11:20:19.577143  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:20:19.577187  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.594134  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:19.688036  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:20:19.710326  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0317 11:20:19.732614  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
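
The server certificate copied above was generated with the SAN list from the provision step (127.0.0.1, 192.168.76.2, the machine name, localhost, minikube). A hedged way to double-check what actually landed on the node:

    # Inspect the SANs baked into the provisioned server cert (sketch;
    # /etc/docker/server.pem is the remote path used by the scp above):
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'
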
	I0317 11:20:19.753945  341496 provision.go:87] duration metric: took 270.590449ms to configureAuth
	I0317 11:20:19.753968  341496 ubuntu.go:193] setting minikube options for container-runtime
	I0317 11:20:19.754118  341496 config.go:182] Loaded profile config "default-k8s-diff-port-627203": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:19.754128  341496 machine.go:96] duration metric: took 3.744035437s to provisionDockerMachine
	I0317 11:20:19.754134  341496 client.go:171] duration metric: took 9.612615756s to LocalClient.Create
	I0317 11:20:19.754154  341496 start.go:167] duration metric: took 9.612671271s to libmachine.API.Create "default-k8s-diff-port-627203"
	I0317 11:20:19.754161  341496 start.go:293] postStartSetup for "default-k8s-diff-port-627203" (driver="docker")
	I0317 11:20:19.754175  341496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:20:19.754215  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:20:19.754250  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.771203  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:19.872391  341496 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:20:19.875550  341496 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 11:20:19.875582  341496 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 11:20:19.875595  341496 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 11:20:19.875607  341496 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 11:20:19.875635  341496 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/addons for local assets ...
	I0317 11:20:19.875698  341496 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/files for local assets ...
	I0317 11:20:19.875804  341496 filesync.go:149] local asset: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem -> 116902.pem in /etc/ssl/certs
	I0317 11:20:19.875917  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:20:19.883445  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:20:19.905732  341496 start.go:296] duration metric: took 151.558516ms for postStartSetup
	I0317 11:20:19.906060  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:19.925755  341496 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json ...
	I0317 11:20:19.926020  341496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 11:20:19.926086  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.944770  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:15.751647  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:15.751680  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:15.751688  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:15.751699  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:15.751704  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:15.751711  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:15.751716  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:15.751722  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:15.751728  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:15.751750  317731 retry.go:31] will retry after 15.481083164s: missing components: kube-dns
	I0317 11:20:20.036185  341496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 11:20:20.040344  341496 start.go:128] duration metric: took 9.901332366s to createHost
	I0317 11:20:20.040365  341496 start.go:83] releasing machines lock for "default-k8s-diff-port-627203", held for 9.901455126s
	I0317 11:20:20.040424  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:20.057945  341496 ssh_runner.go:195] Run: cat /version.json
	I0317 11:20:20.057987  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:20.058044  341496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 11:20:20.058110  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:20.077893  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:20.078299  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:20.248043  341496 ssh_runner.go:195] Run: systemctl --version
	I0317 11:20:20.252422  341496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:20:20.256698  341496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 11:20:20.280151  341496 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 11:20:20.280205  341496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:20:20.303739  341496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
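
After the find/sed pass above, a stock loopback config ends up roughly as follows (a sketch; the exact filename under /etc/cni/net.d varies by base image):

    # Patched loopback CNI config: a "name" key is inserted and cniVersion pinned.
    cat /etc/cni/net.d/*loopback.conf*
    # {
    #     "cniVersion": "1.0.0",
    #     "name": "loopback",
    #     "type": "loopback"
    # }
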
	I0317 11:20:20.303757  341496 start.go:495] detecting cgroup driver to use...
	I0317 11:20:20.303795  341496 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 11:20:20.303871  341496 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:20:20.314490  341496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:20:20.323921  341496 docker.go:217] disabling cri-docker service (if available) ...
	I0317 11:20:20.323964  341496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 11:20:20.336961  341496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 11:20:20.348981  341496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 11:20:20.427755  341496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 11:20:20.507541  341496 docker.go:233] disabling docker service ...
	I0317 11:20:20.507615  341496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 11:20:20.525433  341496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 11:20:20.536350  341496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 11:20:20.601585  341496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 11:20:20.666739  341496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 11:20:20.677294  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:20:20.692169  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:20:20.700729  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:20:20.709826  341496 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:20:20.709888  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:20:20.718738  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:20:20.727842  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:20:20.736960  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:20:20.745738  341496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:20:20.753974  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:20:20.762628  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:20:20.770887  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
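
Taken together, the sed passes above pin a handful of keys in /etc/containerd/config.toml. A quick way to confirm the result (a sketch; exact table nesting depends on the containerd 1.7 config layout):

    grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' \
      /etc/containerd/config.toml
    # Expected values after the edits above:
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   enable_unprivileged_ports = true
    #   SystemdCgroup = false        # cgroupfs driver, not systemd
    #   conf_dir = "/etc/cni/net.d"
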
	I0317 11:20:20.779873  341496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:20:20.787306  341496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:20:20.794585  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:20.857244  341496 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 11:20:20.962615  341496 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 11:20:20.962696  341496 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 11:20:20.966342  341496 start.go:563] Will wait 60s for crictl version
	I0317 11:20:20.966394  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:20:20.969458  341496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:20:21.000301  341496 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 11:20:21.000364  341496 ssh_runner.go:195] Run: containerd --version
	I0317 11:20:21.021585  341496 ssh_runner.go:195] Run: containerd --version
	I0317 11:20:21.045298  341496 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 11:20:21.046823  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:20:21.063998  341496 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0317 11:20:21.067681  341496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:20:21.078036  341496 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:20:21.078155  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:21.078215  341496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:20:21.110394  341496 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:20:21.110416  341496 containerd.go:534] Images already preloaded, skipping extraction
	I0317 11:20:21.110471  341496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:20:21.147039  341496 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:20:21.147059  341496 cache_images.go:84] Images are preloaded, skipping loading
	I0317 11:20:21.147072  341496 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.32.2 containerd true true} ...
	I0317 11:20:21.147182  341496 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-627203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
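
One note on the kubelet drop-in above: the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before substituting minikube's own command line; without it, systemd would reject a second ExecStart on a simple service. The merged result can be inspected with:

    # Show the base unit plus every drop-in, in merge order:
    sudo systemctl cat kubelet
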
	I0317 11:20:21.147245  341496 ssh_runner.go:195] Run: sudo crictl info
	I0317 11:20:21.180368  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:21.180402  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:21.180417  341496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:20:21.180451  341496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-627203 NodeName:default-k8s-diff-port-627203 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 11:20:21.180598  341496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-627203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 11:20:21.180676  341496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:20:21.189167  341496 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 11:20:21.189222  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 11:20:21.197091  341496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0317 11:20:21.212836  341496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:20:21.228613  341496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2318 bytes)
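
The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init (see the cp further down). A config like this can be sanity-checked without touching node state via kubeadm's dry-run mode (a hedged sketch):

    # Validate the rendered config without creating anything:
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
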
	I0317 11:20:21.244235  341496 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0317 11:20:21.247449  341496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:20:21.257029  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:21.331412  341496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:20:21.344658  341496 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203 for IP: 192.168.76.2
	I0317 11:20:21.344685  341496 certs.go:194] generating shared ca certs ...
	I0317 11:20:21.344706  341496 certs.go:226] acquiring lock for ca certs: {Name:mkf58624c63680e02907d28348d45986283847c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.344852  341496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key
	I0317 11:20:21.344888  341496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key
	I0317 11:20:21.344900  341496 certs.go:256] generating profile certs ...
	I0317 11:20:21.344967  341496 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key
	I0317 11:20:21.344994  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt with IP's: []
	I0317 11:20:21.433063  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt ...
	I0317 11:20:21.433090  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt: {Name:mk081d27f47a46e83ef42cd529ab90efa4a42374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.433242  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key ...
	I0317 11:20:21.433256  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key: {Name:mk3ff3f97f5b6d17c55106167353f358e3be7b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.433330  341496 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2
	I0317 11:20:21.433345  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0317 11:20:21.695664  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 ...
	I0317 11:20:21.695695  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2: {Name:mk7442ef755923abf17c70bd38ce4a38e38e6b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.695884  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2 ...
	I0317 11:20:21.695904  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2: {Name:mke8376d0935665b80188d48fe43b8e5b8ff6f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.695977  341496 certs.go:381] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt
	I0317 11:20:21.696069  341496 certs.go:385] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key
	I0317 11:20:21.696166  341496 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key
	I0317 11:20:21.696189  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt with IP's: []
	I0317 11:20:21.791034  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt ...
	I0317 11:20:21.791067  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt: {Name:mk96f99fc08821936606db2cdde9f87f27d42fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.791243  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key ...
	I0317 11:20:21.791284  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key: {Name:mk0e9ec0c366cd0af025f90a833ba1e60d673556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.791492  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem (1338 bytes)
	W0317 11:20:21.791525  341496 certs.go:480] ignoring /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690_empty.pem, impossibly tiny 0 bytes
	I0317 11:20:21.791536  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 11:20:21.791559  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem (1082 bytes)
	I0317 11:20:21.791585  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem (1123 bytes)
	I0317 11:20:21.791609  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem (1679 bytes)
	I0317 11:20:21.791644  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:20:21.792251  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:20:21.814842  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:20:21.836814  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:20:21.860128  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 11:20:21.881562  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0317 11:20:21.903421  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 11:20:21.928625  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:20:21.951436  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 11:20:21.974719  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem --> /usr/share/ca-certificates/11690.pem (1338 bytes)
	I0317 11:20:21.998103  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /usr/share/ca-certificates/116902.pem (1708 bytes)
	I0317 11:20:22.019954  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:20:22.042505  341496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:20:22.058914  341496 ssh_runner.go:195] Run: openssl version
	I0317 11:20:22.064354  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116902.pem && ln -fs /usr/share/ca-certificates/116902.pem /etc/ssl/certs/116902.pem"
	I0317 11:20:22.073425  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.076909  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:32 /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.076964  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.084480  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116902.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:20:22.094200  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:20:22.103020  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.106304  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.106414  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.112757  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:20:22.121663  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11690.pem && ln -fs /usr/share/ca-certificates/11690.pem /etc/ssl/certs/11690.pem"
	I0317 11:20:22.130150  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.133632  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:32 /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.133685  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.140348  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11690.pem /etc/ssl/certs/51391683.0"
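
The hex names used for the symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: TLS clients look up a CA under /etc/ssl/certs as <subject_hash>.<n>, and the openssl x509 -hash invocations in the log print exactly that value. For example:

    # The symlink name is derived from the certificate's subject hash:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   <- matches /etc/ssl/certs/b5213941.0 linked above
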
	I0317 11:20:22.148875  341496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:20:22.151896  341496 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:20:22.151951  341496 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:20:22.152020  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 11:20:22.152054  341496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 11:20:22.184980  341496 cri.go:89] found id: ""
	I0317 11:20:22.185043  341496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:20:22.193505  341496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:20:22.201849  341496 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 11:20:22.201930  341496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:20:22.210091  341496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:20:22.210113  341496 kubeadm.go:157] found existing configuration files:
	
	I0317 11:20:22.210163  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0317 11:20:22.218192  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:20:22.218255  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:20:22.226657  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0317 11:20:22.239638  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:20:22.239694  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:20:22.247616  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0317 11:20:22.256388  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:20:22.256448  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:20:22.264706  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0317 11:20:22.272518  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:20:22.272585  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
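
The four grep/rm pairs above implement a single rule: discard any pre-existing kubeconfig that does not point at this cluster's control-plane endpoint. Condensed, the check is equivalent to something like this (a sketch, not minikube's actual code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8444' \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
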
	I0317 11:20:22.281056  341496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 11:20:22.333597  341496 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 11:20:22.333966  341496 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 11:20:22.389918  341496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 11:20:25.643642  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:25.643677  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:25.643687  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:25.643701  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:25.643706  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:25.643713  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:25.643718  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:25.643723  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:25.643727  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:25.643744  326404 retry.go:31] will retry after 15.233092286s: missing components: kube-dns
	I0317 11:20:31.555534  341496 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 11:20:31.555624  341496 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:20:31.555753  341496 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 11:20:31.555806  341496 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 11:20:31.555879  341496 kubeadm.go:310] OS: Linux
	I0317 11:20:31.555963  341496 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 11:20:31.556040  341496 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 11:20:31.556116  341496 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 11:20:31.556186  341496 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 11:20:31.556263  341496 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 11:20:31.556356  341496 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 11:20:31.556406  341496 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 11:20:31.556449  341496 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 11:20:31.556490  341496 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 11:20:31.556550  341496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:20:31.556678  341496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:20:31.556827  341496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:20:31.556924  341496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:20:31.558772  341496 out.go:235]   - Generating certificates and keys ...
	I0317 11:20:31.558886  341496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:20:31.558955  341496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:20:31.559017  341496 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:20:31.559068  341496 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:20:31.559146  341496 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:20:31.559215  341496 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:20:31.559342  341496 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:20:31.559507  341496 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-627203 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 11:20:31.559566  341496 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:20:31.559687  341496 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-627203 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 11:20:31.559743  341496 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:20:31.559836  341496 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:20:31.559913  341496 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:20:31.560004  341496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:20:31.560089  341496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:20:31.560182  341496 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 11:20:31.560271  341496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:20:31.560363  341496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:20:31.560437  341496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:20:31.560547  341496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:20:31.560619  341496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:20:31.561976  341496 out.go:235]   - Booting up control plane ...
	I0317 11:20:31.562075  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:20:31.562146  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:20:31.562203  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:20:31.562291  341496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:20:31.562370  341496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:20:31.562404  341496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:20:31.562526  341496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 11:20:31.562631  341496 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 11:20:31.562686  341496 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.585498ms
	I0317 11:20:31.562756  341496 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 11:20:31.562810  341496 kubeadm.go:310] [api-check] The API server is healthy after 5.001640951s
	I0317 11:20:31.562926  341496 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:20:31.563043  341496 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:20:31.563096  341496 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:20:31.563308  341496 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-627203 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 11:20:31.563370  341496 kubeadm.go:310] [bootstrap-token] Using token: cynw4v.vidupn9uwbpkry9q
	I0317 11:20:31.565344  341496 out.go:235]   - Configuring RBAC rules ...
	I0317 11:20:31.565438  341496 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:20:31.565516  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:20:31.565649  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:20:31.565854  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:20:31.565999  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:20:31.566087  341496 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:20:31.566197  341496 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:20:31.566250  341496 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:20:31.566293  341496 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:20:31.566298  341496 kubeadm.go:310] 
	I0317 11:20:31.566370  341496 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:20:31.566379  341496 kubeadm.go:310] 
	I0317 11:20:31.566477  341496 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:20:31.566484  341496 kubeadm.go:310] 
	I0317 11:20:31.566505  341496 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:20:31.566555  341496 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:20:31.566599  341496 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:20:31.566605  341496 kubeadm.go:310] 
	I0317 11:20:31.566649  341496 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:20:31.566655  341496 kubeadm.go:310] 
	I0317 11:20:31.566724  341496 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:20:31.566735  341496 kubeadm.go:310] 
	I0317 11:20:31.566814  341496 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:20:31.566915  341496 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:20:31.567023  341496 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:20:31.567040  341496 kubeadm.go:310] 
	I0317 11:20:31.567157  341496 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:20:31.567285  341496 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:20:31.567299  341496 kubeadm.go:310] 
	I0317 11:20:31.567400  341496 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cynw4v.vidupn9uwbpkry9q \
	I0317 11:20:31.567505  341496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 \
	I0317 11:20:31.567540  341496 kubeadm.go:310] 	--control-plane 
	I0317 11:20:31.567550  341496 kubeadm.go:310] 
	I0317 11:20:31.567675  341496 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:20:31.567685  341496 kubeadm.go:310] 
	I0317 11:20:31.567820  341496 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cynw4v.vidupn9uwbpkry9q \
	I0317 11:20:31.567990  341496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 
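
The kubelet-check and api-check gates in the init output above are plain HTTP endpoints, so they can be probed by hand when a start stalls at this stage (hedged examples, run on the node; the API server port is 8444 for this profile):

    # Kubelet liveness, the endpoint the kubelet-check polls:
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok
    # API server health (TLS, hence -k):
    curl -sk https://localhost:8444/healthz
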
	I0317 11:20:31.568005  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:31.568014  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:31.570308  341496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 11:20:31.571654  341496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 11:20:31.575330  341496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 11:20:31.575346  341496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 11:20:31.592203  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
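The cni.yaml applied here is minikube's bundled kindnet manifest, chosen above because the docker driver is paired with the containerd runtime. A quick hedged check that it landed (assuming the DaemonSet keeps its upstream name kindnet, which this log does not confirm) would be:

    sudo /var/lib/minikube/binaries/v1.32.2/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get ds kindnet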
	I0317 11:20:31.796107  341496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:20:31.796185  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:31.796227  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-627203 minikube.k8s.io/updated_at=2025_03_17T11_20_31_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=default-k8s-diff-port-627203 minikube.k8s.io/primary=true
	I0317 11:20:31.913761  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:31.913762  341496 ops.go:34] apiserver oom_adj: -16
	I0317 11:20:32.414495  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:32.914861  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:33.414784  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:33.914144  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:34.414705  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:34.913915  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:35.414122  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:35.487512  341496 kubeadm.go:1113] duration metric: took 3.691382531s to wait for elevateKubeSystemPrivileges
	I0317 11:20:35.487556  341496 kubeadm.go:394] duration metric: took 13.335608972s to StartCluster
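elevateKubeSystemPrivileges is the wait visible in the repeated kubectl get sa calls above: minikube polls roughly every 500ms for the "default" ServiceAccount, which the controller-manager creates asynchronously after the minikube-rbac ClusterRoleBinding is applied. A minimal shell sketch of the same wait, reusing the paths from this log:

    # poll until the "default" ServiceAccount exists (created asynchronously)
    until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done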
	I0317 11:20:35.487576  341496 settings.go:142] acquiring lock: {Name:mk2a57d556efff40ccd4336229d7a78216b861f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:35.487640  341496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:20:35.489566  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/kubeconfig: {Name:mk686b9f6159ab958672b945ae0aa5a9c96e9ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:35.489774  341496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:20:35.489881  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:20:35.489943  341496 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:20:35.490029  341496 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-627203"
	I0317 11:20:35.490056  341496 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-627203"
	I0317 11:20:35.490076  341496 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-627203"
	I0317 11:20:35.490078  341496 config.go:182] Loaded profile config "default-k8s-diff-port-627203": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:35.490098  341496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-627203"
	I0317 11:20:35.490113  341496 host.go:66] Checking if "default-k8s-diff-port-627203" exists ...
	I0317 11:20:35.490455  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.490636  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.491384  341496 out.go:177] * Verifying Kubernetes components...
	I0317 11:20:35.492644  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:35.518758  341496 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-627203"
	I0317 11:20:35.518803  341496 host.go:66] Checking if "default-k8s-diff-port-627203" exists ...
	I0317 11:20:35.519164  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.520182  341496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:20:31.236896  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:31.236935  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:31.236944  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:31.236959  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:31.236964  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:31.236971  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:31.236976  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:31.236984  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:31.236990  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:31.237009  317731 retry.go:31] will retry after 19.261545466s: missing components: kube-dns
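This retry block is the signature of the failure mode shared by all the clusters in this report: every static control-plane pod reports Running while coredns and kindnet stay Pending with ContainersNotReady, so the kube-dns check can never pass. A hedged first diagnostic (k8s-app=kube-dns is the standard CoreDNS label, also listed in the pod_ready wait below) would be:

    kubectl -n kube-system describe pod -l k8s-app=kube-dns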
	I0317 11:20:35.521412  341496 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:20:35.521431  341496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:20:35.521480  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:35.546610  341496 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:20:35.546635  341496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:20:35.546679  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:35.549777  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:35.572702  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:35.624663  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
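The sed pipeline above rewrites the coredns ConfigMap before replacing it: it injects a hosts plugin block ahead of the forward directive (and a log directive before errors) so that host.minikube.internal resolves to the host gateway from inside pods. Reconstructed from the sed expressions, the inserted Corefile fragment is:

            hosts {
               192.168.76.1 host.minikube.internal
               fallthrough
            }

fallthrough hands every other name on to the existing forward . /etc/resolv.conf block.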
	I0317 11:20:35.637144  341496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:20:35.724225  341496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:20:35.825754  341496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:20:36.141080  341496 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0317 11:20:36.142459  341496 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-627203" to be "Ready" ...
	I0317 11:20:36.207177  341496 node_ready.go:49] node "default-k8s-diff-port-627203" has status "Ready":"True"
	I0317 11:20:36.207215  341496 node_ready.go:38] duration metric: took 64.732247ms for node "default-k8s-diff-port-627203" to be "Ready" ...
	I0317 11:20:36.207231  341496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:20:36.211865  341496 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace to be "Ready" ...
	I0317 11:20:36.619880  341496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:20:36.621467  341496 addons.go:514] duration metric: took 1.131519409s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 11:20:36.646479  341496 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-627203" context rescaled to 1 replicas
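kubeadm deploys coredns with two replicas by default; the kapi.go line above records minikube scaling the Deployment down to one, which is enough on a single-node cluster and avoids a second perpetually-Pending replica. The manual equivalent would be roughly:

    kubectl -n kube-system scale deployment coredns --replicas=1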
	I0317 11:20:38.217170  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:40.881134  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:40.881166  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:40.881172  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:40.881180  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:40.881183  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:40.881187  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:40.881190  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:40.881194  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:40.881197  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:40.881210  326404 retry.go:31] will retry after 23.951072137s: missing components: kube-dns
	I0317 11:20:40.524557  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:20:40.524600  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:20:40.524614  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:20:40.524624  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:40.524632  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:20:40.524640  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:20:40.524649  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:20:40.524658  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:20:40.524664  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:20:40.524673  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:20:40.524693  271403 retry.go:31] will retry after 1m5.301611864s: missing components: kube-dns
	I0317 11:20:40.217729  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:42.716852  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:44.717026  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:46.717095  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:49.217150  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:50.502591  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:50.502629  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:50.502636  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:50.502647  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:50.502652  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:50.502658  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:50.502664  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:50.502670  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:50.502676  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:50.502696  317731 retry.go:31] will retry after 27.654906766s: missing components: kube-dns
	I0317 11:20:51.716947  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:54.217035  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:56.217405  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:58.716755  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:00.717212  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:03.216840  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:04.837935  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:04.837975  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:04.837986  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:21:04.837998  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:04.838004  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:21:04.838010  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:21:04.838016  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:21:04.838020  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:21:04.838025  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:21:04.838044  326404 retry.go:31] will retry after 29.604408571s: missing components: kube-dns
	I0317 11:21:05.716737  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:07.717290  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:10.216367  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:12.217359  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:14.717254  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:17.216553  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:19.216868  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:18.162882  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:18.162924  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:18.162931  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:21:18.162943  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:18.162950  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:21:18.162957  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:21:18.162963  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:21:18.162969  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:21:18.162978  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:21:18.162995  317731 retry.go:31] will retry after 25.805377541s: missing components: kube-dns
	I0317 11:21:21.717204  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:23.717446  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:26.217593  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:28.716779  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:30.716838  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:32.717482  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:34.717607  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:34.446564  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:34.446602  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:34.446609  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:21:34.446620  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:34.446625  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:21:34.446633  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:21:34.446637  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:21:34.446644  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:21:34.446649  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:21:34.446672  326404 retry.go:31] will retry after 39.340349632s: missing components: kube-dns
	I0317 11:21:37.217012  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:39.720107  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:42.217009  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:44.717014  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:43.975001  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:43.975039  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:43.975046  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:21:43.975057  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:43.975063  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:21:43.975070  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:21:43.975075  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:21:43.975082  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:21:43.975087  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:21:43.975105  317731 retry.go:31] will retry after 50.299309092s: missing components: kube-dns
	I0317 11:21:45.830506  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:21:45.830550  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:21:45.830565  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:21:45.830575  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:45.830582  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:21:45.830589  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:21:45.830596  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:21:45.830602  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:21:45.830612  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:21:45.830619  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:21:45.830639  271403 retry.go:31] will retry after 1m6.469274108s: missing components: kube-dns
	I0317 11:21:47.216852  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:49.716980  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:51.717159  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:53.717199  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:56.216966  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:58.716666  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:00.716842  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:03.216854  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:05.716421  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:07.717473  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:09.717607  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:12.216801  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:14.217528  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:13.791135  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:13.791174  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:13.791182  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:22:13.791189  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:13.791193  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:22:13.791198  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:22:13.791201  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:22:13.791204  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:22:13.791207  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:22:13.791221  326404 retry.go:31] will retry after 37.076286109s: missing components: kube-dns
	I0317 11:22:16.716908  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:18.717190  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:21.216745  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:23.717172  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:25.717597  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:28.216363  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:30.216624  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:32.216877  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:34.716824  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:34.281779  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:34.281815  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:34.281822  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:22:34.281830  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:34.281834  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:22:34.281840  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:22:34.281844  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:22:34.281848  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:22:34.281851  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:22:34.281866  317731 retry.go:31] will retry after 1m2.657088736s: missing components: kube-dns
	I0317 11:22:37.217665  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:39.716973  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:41.717247  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:44.216939  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:46.716529  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:48.716994  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:50.872276  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:50.872306  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:50.872312  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:22:50.872319  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:50.872323  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:22:50.872329  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:22:50.872332  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:22:50.872336  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:22:50.872339  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:22:50.872352  326404 retry.go:31] will retry after 59.664508979s: missing components: kube-dns
	I0317 11:22:52.304439  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:22:52.304483  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:22:52.304503  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:22:52.304514  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:52.304522  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:22:52.304529  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:22:52.304538  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:22:52.304546  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:22:52.304553  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:22:52.304559  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:22:52.304577  271403 retry.go:31] will retry after 57.75468648s: missing components: kube-dns
	I0317 11:22:51.216816  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:53.216970  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:55.716609  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:57.717480  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:00.217407  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:02.716365  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:04.716438  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:06.716987  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:09.216843  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:11.217200  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:13.218595  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:15.717196  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:18.216354  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:20.217860  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:22.716518  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:24.717213  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:27.216933  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:29.717016  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:32.216483  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:34.217018  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:36.716769  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:38.717020  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:36.943443  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:23:36.943481  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:36.943487  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:23:36.943497  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:23:36.943503  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:23:36.943509  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:23:36.943512  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:23:36.943516  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:23:36.943520  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:23:36.943538  317731 retry.go:31] will retry after 53.125754107s: missing components: kube-dns
	I0317 11:23:40.717051  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:42.717588  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:45.216717  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:47.717009  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:49.718582  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:50.542962  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:23:50.543000  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:50.543007  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:23:50.543017  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:23:50.543021  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:23:50.543027  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:23:50.543030  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:23:50.543034  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:23:50.543037  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:23:50.543062  326404 retry.go:31] will retry after 54.915772165s: missing components: kube-dns
	I0317 11:23:50.063088  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:23:50.063127  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:23:50.063136  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:23:50.063153  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:50.063159  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:23:50.063166  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:23:50.063169  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:23:50.063174  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:23:50.063177  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:23:50.063180  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:23:50.063197  271403 retry.go:31] will retry after 47.200040689s: missing components: kube-dns
	I0317 11:23:52.216980  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:54.217886  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:56.717131  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:59.217483  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:01.717240  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:04.216952  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:06.217363  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:08.717047  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:11.216816  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:13.217215  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:15.217429  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:17.717023  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:20.216953  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:22.216989  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:24.716953  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:27.217304  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:29.717972  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:30.074980  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:30.075015  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:30.075021  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:24:30.075028  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:30.075032  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:24:30.075036  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:24:30.075040  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:24:30.075046  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:24:30.075049  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:24:30.077099  317731 out.go:201] 
	W0317 11:24:30.078365  317731 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:24:30.078387  317731 out.go:270] * 
	W0317 11:24:30.079214  317731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:24:30.080684  317731 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2cfaeb9439e0d       6e38f40d628db       9 minutes ago       Running             storage-provisioner       0                   a422f8f9977d1       storage-provisioner
	66cb4bfe01314       10cc881966cfd       9 minutes ago       Running             kube-proxy                0                   3b1c00e51d55d       kube-proxy-l5hsd
	47066ea1751e9       ca9843d3b5454       10 minutes ago      Running             kube-apiserver            0                   d563889f3fad2       kube-apiserver-old-k8s-version-702762
	dfb07a4e8ade4       0369cf4303ffd       10 minutes ago      Running             etcd                      0                   878e06ee8ae0a       etcd-old-k8s-version-702762
	cfbb7f23faf71       3138b6e3d4712       10 minutes ago      Running             kube-scheduler            0                   1ac550f332a00       kube-scheduler-old-k8s-version-702762
	344bb10a5d426       b9fa1895dcaa6       10 minutes ago      Running             kube-controller-manager   0                   1a5c415e45274       kube-controller-manager-old-k8s-version-702762
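Only the four static control-plane containers, kube-proxy, and storage-provisioner appear: no coredns or kindnet container was ever created, which is consistent with the missing-kube-dns timeout above. One way to double-check from inside the node (a generic crictl invocation, not taken from this log) is to list containers including exited ones:

    sudo crictl ps -a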
	
	
	==> containerd <==
	Mar 17 11:21:49 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:21:49.560798297Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"8a4f60d2f8b4c24dd4e90b3b91ff9480f232171bd96025e99ba04c45e0a5077c\": failed to find network info for sandbox \"8a4f60d2f8b4c24dd4e90b3b91ff9480f232171bd96025e99ba04c45e0a5077c\""
	Mar 17 11:22:04 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:22:04.542256372Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:22:04 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:22:04.560171131Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"90d52afe06bd7dd06226c87be28f9ba2862a78a8d02eac2ffb3e962156d187da\": failed to find network info for sandbox \"90d52afe06bd7dd06226c87be28f9ba2862a78a8d02eac2ffb3e962156d187da\""
	Mar 17 11:22:16 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:22:16.542146121Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:22:16 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:22:16.560196737Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"50e4cd1af4ba8dbf09807291770176f501703232ec7fd75e8f22d071e10b6c0a\": failed to find network info for sandbox \"50e4cd1af4ba8dbf09807291770176f501703232ec7fd75e8f22d071e10b6c0a\""
	Mar 17 11:22:29 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:22:29.542248866Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:22:29 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:22:29.560923257Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"e81c67250bacf56c33c3377688216c0b7b66996727f5b68199c542bc94458bcc\": failed to find network info for sandbox \"e81c67250bacf56c33c3377688216c0b7b66996727f5b68199c542bc94458bcc\""
	Mar 17 11:22:43 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:22:43.542157185Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:22:43 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:22:43.561605509Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"7477b894cebe3facfc67c3e359f153aba1c75b9413b1e083d573dea186c6ae29\": failed to find network info for sandbox \"7477b894cebe3facfc67c3e359f153aba1c75b9413b1e083d573dea186c6ae29\""
	Mar 17 11:22:54 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:22:54.542149315Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:22:54 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:22:54.562032452Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"66a238b813bd746c2079320c9a484a9ea741c2260ccdefb5038aaeec4a9df4dc\": failed to find network info for sandbox \"66a238b813bd746c2079320c9a484a9ea741c2260ccdefb5038aaeec4a9df4dc\""
	Mar 17 11:23:07 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:23:07.542160907Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:23:07 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:23:07.561115327Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"3474e589600713cbb28b72b77c80fcf7f17ac45a0eac2fb630c3d94aac26c041\": failed to find network info for sandbox \"3474e589600713cbb28b72b77c80fcf7f17ac45a0eac2fb630c3d94aac26c041\""
	Mar 17 11:23:22 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:23:22.542126994Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:23:22 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:23:22.561305488Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"5d5c07395b92048bfa9733925096626982a6abe18a0a9c78e78f33b6596999dc\": failed to find network info for sandbox \"5d5c07395b92048bfa9733925096626982a6abe18a0a9c78e78f33b6596999dc\""
	Mar 17 11:23:35 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:23:35.542448318Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:23:35 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:23:35.560907833Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"1e144837516a72028925985d36c4a32ee7e2a6baede6216e3c3c158897191aa3\": failed to find network info for sandbox \"1e144837516a72028925985d36c4a32ee7e2a6baede6216e3c3c158897191aa3\""
	Mar 17 11:23:49 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:23:49.542246123Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:23:49 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:23:49.561064695Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"d0fa6b8dfb988c6bd6fb9b0110f96632dc63d4e90b142b99ef75b2fe514e6bfe\": failed to find network info for sandbox \"d0fa6b8dfb988c6bd6fb9b0110f96632dc63d4e90b142b99ef75b2fe514e6bfe\""
	Mar 17 11:24:00 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:24:00.542164004Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:24:00 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:24:00.561799071Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"6069a52c6029a0e7b9f05d19c5d0e1ae024cc615a815c98694dc1e9030491d6b\": failed to find network info for sandbox \"6069a52c6029a0e7b9f05d19c5d0e1ae024cc615a815c98694dc1e9030491d6b\""
	Mar 17 11:24:12 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:24:12.541998317Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:24:12 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:24:12.561911168Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"8f99254f48489344e691aa07a60394edc1e687055c7b6612b3ad081dbd2cf505\": failed to find network info for sandbox \"8f99254f48489344e691aa07a60394edc1e687055c7b6612b3ad081dbd2cf505\""
	Mar 17 11:24:23 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:24:23.542240948Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\""
	Mar 17 11:24:23 old-k8s-version-702762 containerd[975]: time="2025-03-17T11:24:23.561287315Z" level=error msg="RunPodSandbox for name:\"coredns-74ff55c5b-f5872\" uid:\"6446de53-94b7-40a7-a689-e22a9a58c27b\" namespace:\"kube-system\" failed, error" error="failed to setup network for sandbox \"918686dcee4d2f50f78b3f25998c6bdc70830e38d226da009d590ad40661f895\": failed to find network info for sandbox \"918686dcee4d2f50f78b3f25998c6bdc70830e38d226da009d590ad40661f895\""
	
	
	==> describe nodes <==
	Name:               old-k8s-version-702762
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-702762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=old-k8s-version-702762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T11_14_20_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 11:14:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-702762
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:24:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:19:35 +0000   Mon, 17 Mar 2025 11:14:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:19:35 +0000   Mon, 17 Mar 2025 11:14:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:19:35 +0000   Mon, 17 Mar 2025 11:14:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:19:35 +0000   Mon, 17 Mar 2025 11:14:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-702762
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 445a07d1f8694e1eb67b1662b85679d5
	  System UUID:                27d1b999-798f-4303-86df-446407042fd2
	  Boot ID:                    6cdff8eb-9dff-46dc-b46a-15af38578335
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.25
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-f5872                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m55s
	  kube-system                 etcd-old-k8s-version-702762                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-qhsp2                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m55s
	  kube-system                 kube-apiserver-old-k8s-version-702762             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-702762    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-l5hsd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-scheduler-old-k8s-version-702762             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  10m (x5 over 10m)  kubelet     Node old-k8s-version-702762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x5 over 10m)  kubelet     Node old-k8s-version-702762 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet     Node old-k8s-version-702762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet     Node old-k8s-version-702762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet     Node old-k8s-version-702762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m56s              kubelet     Node old-k8s-version-702762 status is now: NodeReady
	  Normal  Starting                 9m54s              kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 9f 34 c1 3c 2d 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea db 01 46 f3 5d 08 06
	[Mar17 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 03 06 1a ae 04 08 06
	[Mar17 11:11] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ba d0 41 5a 57 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 03 06 1a ae 04 08 06
	[ +43.804696] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 68 f0 20 09 1d 08 06
	[  +0.014204] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 35 88 eb 1a ca 08 06
	[Mar17 11:12] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 40 5e e0 f5 10 08 06
	[  +0.000328] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da 68 f0 20 09 1d 08 06
	[Mar17 11:13] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 9d fa 19 03 e5 08 06
	[  +0.000467] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7a 6b 3f 12 54 e7 08 06
	[Mar17 11:14] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 15 0b 3c 2b d0 08 06
	[  +0.000401] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7a 6b 3f 12 54 e7 08 06
	
	
	==> etcd [dfb07a4e8ade42dce7c7c126f3f1897f64989b7b5be5fc8c3573b4b2e8dcaf2f] <==
	2025-03-17 11:20:48.700619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:20:58.700790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:21:08.700652 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:21:18.700698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:21:28.700684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:21:38.700560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:21:48.700649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:21:58.700618 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:22:08.700595 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:22:18.700611 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:22:28.700637 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:22:38.700647 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:22:48.700558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:22:58.701576 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:23:08.700577 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:23:18.700635 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:23:28.700640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:23:38.700673 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:23:48.700614 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:23:58.700585 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:24:08.700667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:24:14.448760 I | mvcc: store.index: compact 722
	2025-03-17 11:24:14.449707 I | mvcc: finished scheduled compaction at 722 (took 714.894µs)
	2025-03-17 11:24:18.700608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-03-17 11:24:28.700639 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:24:31 up  1:06,  0 users,  load average: 0.29, 0.84, 1.20
	Linux old-k8s-version-702762 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [47066ea1751e9e20dd6a74d3a99c7f36513aa5d027d2802ec3f01e80f93fbc41] <==
	I0317 11:19:06.077053       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0317 11:19:48.354993       1 client.go:360] parsed scheme: "passthrough"
	I0317 11:19:48.355033       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0317 11:19:48.355045       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0317 11:20:21.982039       1 client.go:360] parsed scheme: "passthrough"
	I0317 11:20:21.982084       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0317 11:20:21.982093       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0317 11:20:59.325533       1 client.go:360] parsed scheme: "passthrough"
	I0317 11:20:59.325584       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0317 11:20:59.325595       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0317 11:21:35.893420       1 client.go:360] parsed scheme: "passthrough"
	I0317 11:21:35.893462       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0317 11:21:35.893470       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0317 11:22:18.338686       1 client.go:360] parsed scheme: "passthrough"
	I0317 11:22:18.338729       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0317 11:22:18.338739       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0317 11:22:51.840040       1 client.go:360] parsed scheme: "passthrough"
	I0317 11:22:51.840079       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0317 11:22:51.840086       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0317 11:23:33.710815       1 client.go:360] parsed scheme: "passthrough"
	I0317 11:23:33.710871       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0317 11:23:33.710881       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0317 11:24:12.514688       1 client.go:360] parsed scheme: "passthrough"
	I0317 11:24:12.514727       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0317 11:24:12.514734       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [344bb10a5d4266cae18772028c99c4d22380f44d96ad9df7167017a219b8fd72] <==
	I0317 11:14:36.037401       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0317 11:14:36.044192       1 shared_informer.go:247] Caches are synced for GC 
	I0317 11:14:36.044395       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0317 11:14:36.045084       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0317 11:14:36.062727       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-f5872"
	I0317 11:14:36.062812       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-l5hsd"
	I0317 11:14:36.065906       1 range_allocator.go:373] Set node old-k8s-version-702762 PodCIDR to [10.244.0.0/24]
	I0317 11:14:36.066432       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qhsp2"
	I0317 11:14:36.094550       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0317 11:14:36.097181       1 shared_informer.go:247] Caches are synced for resource quota 
	I0317 11:14:36.099086       1 shared_informer.go:247] Caches are synced for resource quota 
	E0317 11:14:36.108556       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0317 11:14:36.109731       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-mm622"
	I0317 11:14:36.123782       1 shared_informer.go:247] Caches are synced for endpoint 
	I0317 11:14:36.155497       1 shared_informer.go:247] Caches are synced for attach detach 
	I0317 11:14:36.165507       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0317 11:14:36.185928       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	E0317 11:14:36.275528       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0317 11:14:36.326925       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0317 11:14:36.362420       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0317 11:14:36.665174       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0317 11:14:36.703412       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0317 11:14:36.703448       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0317 11:14:37.519663       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0317 11:14:37.524900       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-mm622"
	
	
	==> kube-proxy [66cb4bfe01314cc7b4b02ff61b35d6e975585645fb7d4e84830af03ea85f5e12] <==
	I0317 11:14:37.109949       1 node.go:172] Successfully retrieved node IP: 192.168.94.2
	I0317 11:14:37.110024       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.94.2), assume IPv4 operation
	W0317 11:14:37.131539       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0317 11:14:37.131698       1 server_others.go:185] Using iptables Proxier.
	I0317 11:14:37.132402       1 server.go:650] Version: v1.20.0
	I0317 11:14:37.133168       1 config.go:315] Starting service config controller
	I0317 11:14:37.133190       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0317 11:14:37.133209       1 config.go:224] Starting endpoint slice config controller
	I0317 11:14:37.133213       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0317 11:14:37.233321       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0317 11:14:37.233380       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [cfbb7f23faf719e3ee66d1df205cf2273ff01afc6fb18222ffd416860d1d5827] <==
	I0317 11:14:14.029757       1 serving.go:331] Generated self-signed cert in-memory
	W0317 11:14:17.325776       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0317 11:14:17.325808       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0317 11:14:17.325824       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0317 11:14:17.325834       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0317 11:14:17.417047       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 11:14:17.417075       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 11:14:17.417648       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0317 11:14:17.417711       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0317 11:14:17.420805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 11:14:17.420913       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 11:14:17.421014       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0317 11:14:17.421414       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 11:14:17.422057       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0317 11:14:17.427349       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 11:14:17.428467       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 11:14:17.428586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 11:14:17.428630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 11:14:17.428698       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 11:14:17.432259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0317 11:14:17.432448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 11:14:18.405608       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0317 11:14:18.461741       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 11:14:18.474409       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0317 11:14:20.817229       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 17 11:23:23 old-k8s-version-702762 kubelet[2107]: E0317 11:23:23.542550    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Mar 17 11:23:35 old-k8s-version-702762 kubelet[2107]: E0317 11:23:35.542644    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Mar 17 11:23:35 old-k8s-version-702762 kubelet[2107]: E0317 11:23:35.561141    2107 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "1e144837516a72028925985d36c4a32ee7e2a6baede6216e3c3c158897191aa3": failed to find network info for sandbox "1e144837516a72028925985d36c4a32ee7e2a6baede6216e3c3c158897191aa3"
	Mar 17 11:23:35 old-k8s-version-702762 kubelet[2107]: E0317 11:23:35.561212    2107 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "1e144837516a72028925985d36c4a32ee7e2a6baede6216e3c3c158897191aa3": failed to find network info for sandbox "1e144837516a72028925985d36c4a32ee7e2a6baede6216e3c3c158897191aa3"
	Mar 17 11:23:35 old-k8s-version-702762 kubelet[2107]: E0317 11:23:35.561226    2107 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "1e144837516a72028925985d36c4a32ee7e2a6baede6216e3c3c158897191aa3": failed to find network info for sandbox "1e144837516a72028925985d36c4a32ee7e2a6baede6216e3c3c158897191aa3"
	Mar 17 11:23:35 old-k8s-version-702762 kubelet[2107]: E0317 11:23:35.561277    2107 pod_workers.go:191] Error syncing pod 6446de53-94b7-40a7-a689-e22a9a58c27b ("coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"1e144837516a72028925985d36c4a32ee7e2a6baede6216e3c3c158897191aa3\": failed to find network info for sandbox \"1e144837516a72028925985d36c4a32ee7e2a6baede6216e3c3c158897191aa3\""
	Mar 17 11:23:48 old-k8s-version-702762 kubelet[2107]: E0317 11:23:48.542452    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Mar 17 11:23:49 old-k8s-version-702762 kubelet[2107]: E0317 11:23:49.561346    2107 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d0fa6b8dfb988c6bd6fb9b0110f96632dc63d4e90b142b99ef75b2fe514e6bfe": failed to find network info for sandbox "d0fa6b8dfb988c6bd6fb9b0110f96632dc63d4e90b142b99ef75b2fe514e6bfe"
	Mar 17 11:23:49 old-k8s-version-702762 kubelet[2107]: E0317 11:23:49.561410    2107 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d0fa6b8dfb988c6bd6fb9b0110f96632dc63d4e90b142b99ef75b2fe514e6bfe": failed to find network info for sandbox "d0fa6b8dfb988c6bd6fb9b0110f96632dc63d4e90b142b99ef75b2fe514e6bfe"
	Mar 17 11:23:49 old-k8s-version-702762 kubelet[2107]: E0317 11:23:49.561423    2107 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "d0fa6b8dfb988c6bd6fb9b0110f96632dc63d4e90b142b99ef75b2fe514e6bfe": failed to find network info for sandbox "d0fa6b8dfb988c6bd6fb9b0110f96632dc63d4e90b142b99ef75b2fe514e6bfe"
	Mar 17 11:23:49 old-k8s-version-702762 kubelet[2107]: E0317 11:23:49.561486    2107 pod_workers.go:191] Error syncing pod 6446de53-94b7-40a7-a689-e22a9a58c27b ("coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"d0fa6b8dfb988c6bd6fb9b0110f96632dc63d4e90b142b99ef75b2fe514e6bfe\": failed to find network info for sandbox \"d0fa6b8dfb988c6bd6fb9b0110f96632dc63d4e90b142b99ef75b2fe514e6bfe\""
	Mar 17 11:24:00 old-k8s-version-702762 kubelet[2107]: E0317 11:24:00.562069    2107 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6069a52c6029a0e7b9f05d19c5d0e1ae024cc615a815c98694dc1e9030491d6b": failed to find network info for sandbox "6069a52c6029a0e7b9f05d19c5d0e1ae024cc615a815c98694dc1e9030491d6b"
	Mar 17 11:24:00 old-k8s-version-702762 kubelet[2107]: E0317 11:24:00.562131    2107 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6069a52c6029a0e7b9f05d19c5d0e1ae024cc615a815c98694dc1e9030491d6b": failed to find network info for sandbox "6069a52c6029a0e7b9f05d19c5d0e1ae024cc615a815c98694dc1e9030491d6b"
	Mar 17 11:24:00 old-k8s-version-702762 kubelet[2107]: E0317 11:24:00.562153    2107 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6069a52c6029a0e7b9f05d19c5d0e1ae024cc615a815c98694dc1e9030491d6b": failed to find network info for sandbox "6069a52c6029a0e7b9f05d19c5d0e1ae024cc615a815c98694dc1e9030491d6b"
	Mar 17 11:24:00 old-k8s-version-702762 kubelet[2107]: E0317 11:24:00.562204    2107 pod_workers.go:191] Error syncing pod 6446de53-94b7-40a7-a689-e22a9a58c27b ("coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"6069a52c6029a0e7b9f05d19c5d0e1ae024cc615a815c98694dc1e9030491d6b\": failed to find network info for sandbox \"6069a52c6029a0e7b9f05d19c5d0e1ae024cc615a815c98694dc1e9030491d6b\""
	Mar 17 11:24:02 old-k8s-version-702762 kubelet[2107]: E0317 11:24:02.542358    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Mar 17 11:24:12 old-k8s-version-702762 kubelet[2107]: E0317 11:24:12.562220    2107 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8f99254f48489344e691aa07a60394edc1e687055c7b6612b3ad081dbd2cf505": failed to find network info for sandbox "8f99254f48489344e691aa07a60394edc1e687055c7b6612b3ad081dbd2cf505"
	Mar 17 11:24:12 old-k8s-version-702762 kubelet[2107]: E0317 11:24:12.562296    2107 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8f99254f48489344e691aa07a60394edc1e687055c7b6612b3ad081dbd2cf505": failed to find network info for sandbox "8f99254f48489344e691aa07a60394edc1e687055c7b6612b3ad081dbd2cf505"
	Mar 17 11:24:12 old-k8s-version-702762 kubelet[2107]: E0317 11:24:12.562317    2107 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "8f99254f48489344e691aa07a60394edc1e687055c7b6612b3ad081dbd2cf505": failed to find network info for sandbox "8f99254f48489344e691aa07a60394edc1e687055c7b6612b3ad081dbd2cf505"
	Mar 17 11:24:12 old-k8s-version-702762 kubelet[2107]: E0317 11:24:12.562391    2107 pod_workers.go:191] Error syncing pod 6446de53-94b7-40a7-a689-e22a9a58c27b ("coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"8f99254f48489344e691aa07a60394edc1e687055c7b6612b3ad081dbd2cf505\": failed to find network info for sandbox \"8f99254f48489344e691aa07a60394edc1e687055c7b6612b3ad081dbd2cf505\""
	Mar 17 11:24:17 old-k8s-version-702762 kubelet[2107]: E0317 11:24:17.542404    2107 pod_workers.go:191] Error syncing pod 57e41c3b-76bc-47e0-b204-638d30f47ab4 ("kindnet-qhsp2_kube-system(57e41c3b-76bc-47e0-b204-638d30f47ab4)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Mar 17 11:24:23 old-k8s-version-702762 kubelet[2107]: E0317 11:24:23.561530    2107 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "918686dcee4d2f50f78b3f25998c6bdc70830e38d226da009d590ad40661f895": failed to find network info for sandbox "918686dcee4d2f50f78b3f25998c6bdc70830e38d226da009d590ad40661f895"
	Mar 17 11:24:23 old-k8s-version-702762 kubelet[2107]: E0317 11:24:23.561600    2107 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "918686dcee4d2f50f78b3f25998c6bdc70830e38d226da009d590ad40661f895": failed to find network info for sandbox "918686dcee4d2f50f78b3f25998c6bdc70830e38d226da009d590ad40661f895"
	Mar 17 11:24:23 old-k8s-version-702762 kubelet[2107]: E0317 11:24:23.561616    2107 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "918686dcee4d2f50f78b3f25998c6bdc70830e38d226da009d590ad40661f895": failed to find network info for sandbox "918686dcee4d2f50f78b3f25998c6bdc70830e38d226da009d590ad40661f895"
	Mar 17 11:24:23 old-k8s-version-702762 kubelet[2107]: E0317 11:24:23.561675    2107 pod_workers.go:191] Error syncing pod 6446de53-94b7-40a7-a689-e22a9a58c27b ("coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-f5872_kube-system(6446de53-94b7-40a7-a689-e22a9a58c27b)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"918686dcee4d2f50f78b3f25998c6bdc70830e38d226da009d590ad40661f895\": failed to find network info for sandbox \"918686dcee4d2f50f78b3f25998c6bdc70830e38d226da009d590ad40661f895\""
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-702762 -n old-k8s-version-702762
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-702762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-74ff55c5b-f5872 kindnet-qhsp2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-702762 describe pod coredns-74ff55c5b-f5872 kindnet-qhsp2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-702762 describe pod coredns-74ff55c5b-f5872 kindnet-qhsp2: exit status 1 (60.250808ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-74ff55c5b-f5872" not found
	Error from server (NotFound): pods "kindnet-qhsp2" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-702762 describe pod coredns-74ff55c5b-f5872 kindnet-qhsp2: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (646.49s)

TestStartStop/group/no-preload/serial/FirstStart (609.71s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-189670 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0317 11:14:43.850731   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:43.857137   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:43.868538   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:43.889954   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:43.931403   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:44.012913   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:44.175872   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:44.177059   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:44.497730   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:45.139983   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:46.422296   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:48.984483   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:14:54.106707   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:04.348739   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:24.830586   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:50.080341   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:50.086735   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:50.098088   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:50.119403   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:50.160772   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:50.242217   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:50.403717   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:50.725355   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:51.367401   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:52.649575   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:15:55.211452   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:16:00.333280   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:16:05.792041   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:16:10.575428   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:16:31.056885   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:17:12.019338   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:17:27.714163   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:17:29.943224   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:17:29.949557   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:17:29.960930   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:17:29.982251   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:17:30.023597   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:17:30.105174   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:17:30.266719 ... E0317 11:19:49.308672   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/<profile>/client.crt: no such file or directory" logger="UnhandledError"
(42 near-identical entries condensed: backoff-spaced retries of the same open failure, cycling through profiles bridge-236437, auto-236437, flannel-236437, enable-default-cni-236437, addons-712202, custom-flannel-236437 and functional-793863, whose client.crt files no longer exist on disk)
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-189670 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: exit status 80 (10m7.954486437s)

                                                
                                                
-- stdout --
	* [no-preload-189670] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-189670" primary control-plane node in "no-preload-189670" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 11:14:37.573037  326404 out.go:345] Setting OutFile to fd 1 ...
	I0317 11:14:37.573149  326404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:14:37.573158  326404 out.go:358] Setting ErrFile to fd 2...
	I0317 11:14:37.573162  326404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:14:37.573331  326404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 11:14:37.573882  326404 out.go:352] Setting JSON to false
	I0317 11:14:37.575080  326404 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3371,"bootTime":1742206707,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 11:14:37.575131  326404 start.go:139] virtualization: kvm guest
	I0317 11:14:37.577112  326404 out.go:177] * [no-preload-189670] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 11:14:37.578733  326404 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:14:37.578754  326404 notify.go:220] Checking for updates...
	I0317 11:14:37.581037  326404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:14:37.582274  326404 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:14:37.583417  326404 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 11:14:37.584408  326404 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 11:14:37.585465  326404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:14:37.587371  326404 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:14:37.587577  326404 config.go:182] Loaded profile config "kindnet-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:14:37.587744  326404 config.go:182] Loaded profile config "old-k8s-version-702762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0317 11:14:37.587949  326404 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:14:37.613546  326404 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 11:14:37.613683  326404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:14:37.675807  326404 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:14:37.666983222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:14:37.675909  326404 docker.go:318] overlay module found
	I0317 11:14:37.677458  326404 out.go:177] * Using the docker driver based on user configuration
	I0317 11:14:37.678436  326404 start.go:297] selected driver: docker
	I0317 11:14:37.678450  326404 start.go:901] validating driver "docker" against <nil>
	I0317 11:14:37.678460  326404 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:14:37.679278  326404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:14:37.754494  326404 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:14:37.745980166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:14:37.754686  326404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:14:37.754912  326404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:14:37.756421  326404 out.go:177] * Using Docker driver with root privileges
	I0317 11:14:37.757449  326404 cni.go:84] Creating CNI manager for ""
	I0317 11:14:37.757520  326404 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:14:37.757535  326404 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 11:14:37.757611  326404 start.go:340] cluster config:
	{Name:no-preload-189670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-189670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:14:37.759648  326404 out.go:177] * Starting "no-preload-189670" primary control-plane node in "no-preload-189670" cluster
	I0317 11:14:37.760598  326404 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 11:14:37.761616  326404 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 11:14:37.762612  326404 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:14:37.762641  326404 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 11:14:37.762747  326404 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/config.json ...
	I0317 11:14:37.762784  326404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/config.json: {Name:mk8065c7f9b3959a2ed20976c38cbb43d5bff03d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:37.762821  326404 cache.go:107] acquiring lock: {Name:mkbc3f64c16a862cae480018f3fe3b01ad573de9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:14:37.762830  326404 cache.go:107] acquiring lock: {Name:mk146d98d5737c7f386bf46d08650fcedb50c933 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:14:37.762896  326404 cache.go:107] acquiring lock: {Name:mk6ed2707461bfd20631e4b74a3ddad3a15d6d3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:14:37.762933  326404 cache.go:115] /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0317 11:14:37.762997  326404 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 183.225µs
	I0317 11:14:37.762965  326404 cache.go:107] acquiring lock: {Name:mk11238d11ea0673f45baf045aea35a1297b06b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:14:37.762959  326404 cache.go:107] acquiring lock: {Name:mk499ae29c82eeb430765c506d10f9f0daf8ef70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:14:37.763030  326404 cache.go:107] acquiring lock: {Name:mk9bd70928951c29ddc0b6b1c1d9bd8e0ec9b0b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:14:37.763063  326404 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0317 11:14:37.763011  326404 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.2
	I0317 11:14:37.763018  326404 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0317 11:14:37.763131  326404 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.2
	I0317 11:14:37.763029  326404 cache.go:107] acquiring lock: {Name:mk5c8a74d0e57534669adcd4e14efd7de227bd6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:14:37.763125  326404 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0317 11:14:37.763174  326404 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.2
	I0317 11:14:37.763023  326404 cache.go:107] acquiring lock: {Name:mk70af36d4b62b5f0e6947867e31f11073768223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:14:37.763386  326404 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 11:14:37.763422  326404 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0317 11:14:37.764282  326404 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0317 11:14:37.764342  326404 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.2
	I0317 11:14:37.764372  326404 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0317 11:14:37.764374  326404 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.2
	I0317 11:14:37.764461  326404 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0317 11:14:37.764618  326404 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.2
	I0317 11:14:37.764718  326404 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 11:14:37.784996  326404 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 11:14:37.785014  326404 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 11:14:37.785029  326404 cache.go:230] Successfully downloaded all kic artifacts
	I0317 11:14:37.785053  326404 start.go:360] acquireMachinesLock for no-preload-189670: {Name:mkf665253eb29201306a12d1fdbaf7092dfb72c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:14:37.785130  326404 start.go:364] duration metric: took 62.953µs to acquireMachinesLock for "no-preload-189670"
	I0317 11:14:37.785150  326404 start.go:93] Provisioning new machine with config: &{Name:no-preload-189670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-189670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:14:37.785207  326404 start.go:125] createHost starting for "" (driver="docker")
	I0317 11:14:37.787024  326404 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0317 11:14:37.787211  326404 start.go:159] libmachine.API.Create for "no-preload-189670" (driver="docker")
	I0317 11:14:37.787232  326404 client.go:168] LocalClient.Create starting
	I0317 11:14:37.787297  326404 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem
	I0317 11:14:37.787333  326404 main.go:141] libmachine: Decoding PEM data...
	I0317 11:14:37.787353  326404 main.go:141] libmachine: Parsing certificate...
	I0317 11:14:37.787403  326404 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem
	I0317 11:14:37.787426  326404 main.go:141] libmachine: Decoding PEM data...
	I0317 11:14:37.787436  326404 main.go:141] libmachine: Parsing certificate...
	I0317 11:14:37.787709  326404 cli_runner.go:164] Run: docker network inspect no-preload-189670 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 11:14:37.805605  326404 cli_runner.go:211] docker network inspect no-preload-189670 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 11:14:37.805671  326404 network_create.go:284] running [docker network inspect no-preload-189670] to gather additional debugging logs...
	I0317 11:14:37.805686  326404 cli_runner.go:164] Run: docker network inspect no-preload-189670
	W0317 11:14:37.822905  326404 cli_runner.go:211] docker network inspect no-preload-189670 returned with exit code 1
	I0317 11:14:37.822942  326404 network_create.go:287] error running [docker network inspect no-preload-189670]: docker network inspect no-preload-189670: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-189670 not found
	I0317 11:14:37.822955  326404 network_create.go:289] output of [docker network inspect no-preload-189670]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-189670 not found
	
	** /stderr **
	I0317 11:14:37.823059  326404 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:14:37.841272  326404 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
	I0317 11:14:37.842241  326404 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-00bf62ef0133 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:c5:34:86:d6:21} reservation:<nil>}
	I0317 11:14:37.843181  326404 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-81e0001ceae7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:6a:cf:1c:79:e6} reservation:<nil>}
	I0317 11:14:37.843723  326404 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-16edb2a113e3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d6:59:06:a9:a8:e8} reservation:<nil>}
	I0317 11:14:37.844668  326404 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a81c203e078d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:76:61:16:ca:ff:e4} reservation:<nil>}
	I0317 11:14:37.845509  326404 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ea0054525d5e IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:5e:9f:de:3b:52:f4} reservation:<nil>}
	I0317 11:14:37.846404  326404 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002439e00}
	I0317 11:14:37.846433  326404 network_create.go:124] attempt to create docker network no-preload-189670 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0317 11:14:37.846478  326404 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-189670 no-preload-189670
	I0317 11:14:37.899000  326404 network_create.go:108] docker network no-preload-189670 192.168.103.0/24 created
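
The subnet walk just above is deterministic: candidates step through 192.168.49.0/24, .58, .67, .76, .85 and .94 in increments of 9, and each is skipped when its gateway address is already bound to a local bridge, so the first free block (here 192.168.103.0/24) wins. A minimal Go sketch of that scan, hypothetical rather than minikube's actual network.go:

    package main

    import (
    	"fmt"
    	"net"
    )

    // subnetTaken reports whether the candidate gateway address is already
    // assigned to a local interface (e.g. an existing docker bridge).
    func subnetTaken(gateway string) bool {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return false // assume free if we cannot enumerate interfaces
    	}
    	for _, a := range addrs {
    		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gateway {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Same walk as the log: start at 192.168.49.0/24, then +9 per step.
    	for third := 49; third <= 255; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if subnetTaken(fmt.Sprintf("192.168.%d.1", third)) {
    			fmt.Println("skipping subnet that is taken:", cidr)
    			continue
    		}
    		fmt.Println("using free private subnet:", cidr)
    		break
    	}
    }
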
	I0317 11:14:37.899031  326404 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-189670" container
	I0317 11:14:37.899106  326404 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 11:14:37.918148  326404 cli_runner.go:164] Run: docker volume create no-preload-189670 --label name.minikube.sigs.k8s.io=no-preload-189670 --label created_by.minikube.sigs.k8s.io=true
	I0317 11:14:37.937389  326404 oci.go:103] Successfully created a docker volume no-preload-189670
	I0317 11:14:37.937499  326404 cli_runner.go:164] Run: docker run --rm --name no-preload-189670-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-189670 --entrypoint /usr/bin/test -v no-preload-189670:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 11:14:37.953280  326404 cache.go:162] opening:  /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0317 11:14:37.958394  326404 cache.go:162] opening:  /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2
	I0317 11:14:37.964812  326404 cache.go:162] opening:  /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2
	I0317 11:14:37.981343  326404 cache.go:162] opening:  /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2
	I0317 11:14:37.990424  326404 cache.go:162] opening:  /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0317 11:14:37.992570  326404 cache.go:162] opening:  /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2
	I0317 11:14:38.112684  326404 cache.go:157] /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0317 11:14:38.112721  326404 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 349.838555ms
	I0317 11:14:38.112739  326404 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0317 11:14:38.124486  326404 cache.go:162] opening:  /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0317 11:14:38.415226  326404 cache.go:157] /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0317 11:14:38.415307  326404 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 652.276451ms
	I0317 11:14:38.415325  326404 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0317 11:14:38.467573  326404 oci.go:107] Successfully prepared a docker volume no-preload-189670
	I0317 11:14:38.467603  326404 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	W0317 11:14:38.467737  326404 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 11:14:38.467851  326404 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 11:14:38.517959  326404 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-189670 --name no-preload-189670 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-189670 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-189670 --network no-preload-189670 --ip 192.168.103.2 --volume no-preload-189670:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 11:14:38.785874  326404 cli_runner.go:164] Run: docker container inspect no-preload-189670 --format={{.State.Running}}
	I0317 11:14:38.805268  326404 cli_runner.go:164] Run: docker container inspect no-preload-189670 --format={{.State.Status}}
	I0317 11:14:38.826455  326404 cli_runner.go:164] Run: docker exec no-preload-189670 stat /var/lib/dpkg/alternatives/iptables
	I0317 11:14:38.869464  326404 oci.go:144] the created container "no-preload-189670" has a running status.
	I0317 11:14:38.869493  326404 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/no-preload-189670/id_rsa...
	I0317 11:14:39.112845  326404 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20535-4918/.minikube/machines/no-preload-189670/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 11:14:39.150152  326404 cli_runner.go:164] Run: docker container inspect no-preload-189670 --format={{.State.Status}}
	I0317 11:14:39.171806  326404 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 11:14:39.171828  326404 kic_runner.go:114] Args: [docker exec --privileged no-preload-189670 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 11:14:39.217131  326404 cli_runner.go:164] Run: docker container inspect no-preload-189670 --format={{.State.Status}}
	I0317 11:14:39.237954  326404 machine.go:93] provisionDockerMachine start ...
	I0317 11:14:39.238164  326404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-189670
	I0317 11:14:39.261283  326404 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:39.261576  326404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0317 11:14:39.261595  326404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:14:39.451623  326404 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-189670
	
	I0317 11:14:39.451875  326404 ubuntu.go:169] provisioning hostname "no-preload-189670"
	I0317 11:14:39.452058  326404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-189670
	I0317 11:14:39.485272  326404 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:39.485553  326404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0317 11:14:39.485576  326404 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-189670 && echo "no-preload-189670" | sudo tee /etc/hostname
	I0317 11:14:39.612565  326404 cache.go:157] /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0317 11:14:39.612663  326404 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 1.849767354s
	I0317 11:14:39.612681  326404 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0317 11:14:39.619132  326404 cache.go:157] /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0317 11:14:39.619161  326404 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 1.856262082s
	I0317 11:14:39.619173  326404 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0317 11:14:39.699149  326404 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-189670
	
	I0317 11:14:39.699307  326404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-189670
	I0317 11:14:39.727965  326404 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:39.728223  326404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0317 11:14:39.728244  326404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-189670' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-189670/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-189670' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:14:39.737424  326404 cache.go:157] /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0317 11:14:39.737456  326404 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 1.974649305s
	I0317 11:14:39.737474  326404 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0317 11:14:39.814867  326404 cache.go:157] /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0317 11:14:39.814902  326404 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 2.051933196s
	I0317 11:14:39.814915  326404 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0317 11:14:39.879322  326404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:14:39.879361  326404 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20535-4918/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-4918/.minikube}
	I0317 11:14:39.879428  326404 ubuntu.go:177] setting up certificates
	I0317 11:14:39.879446  326404 provision.go:84] configureAuth start
	I0317 11:14:39.879515  326404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-189670
	I0317 11:14:39.898095  326404 provision.go:143] copyHostCerts
	I0317 11:14:39.898160  326404 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem, removing ...
	I0317 11:14:39.898172  326404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem
	I0317 11:14:39.898236  326404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem (1123 bytes)
	I0317 11:14:39.898332  326404 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem, removing ...
	I0317 11:14:39.898340  326404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem
	I0317 11:14:39.898364  326404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem (1679 bytes)
	I0317 11:14:39.898428  326404 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem, removing ...
	I0317 11:14:39.898436  326404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem
	I0317 11:14:39.898457  326404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem (1082 bytes)
	I0317 11:14:39.898517  326404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem org=jenkins.no-preload-189670 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-189670]
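
The server certificate generated above is signed by the minikube CA and carries the SAN set the log reports: IPs 127.0.0.1 and 192.168.103.2 plus DNS names localhost, minikube and no-preload-189670. A self-contained Go sketch of building that SAN list (self-signed here for brevity, whereas provision.go signs with ca-key.pem; all names and the 26280h lifetime are taken from the log):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-189670"}},
    		NotBefore:    time.Now(),
    		// CertExpiration:26280h0m0s in the cluster config above (3 years)
    		NotAfter:    time.Now().Add(26280 * time.Hour),
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs exactly as logged: san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-189670]
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
    		DNSNames:    []string{"localhost", "minikube", "no-preload-189670"},
    	}
    	// Self-signed for the sketch; minikube signs with its CA cert/key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }
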
	I0317 11:14:40.038219  326404 provision.go:177] copyRemoteCerts
	I0317 11:14:40.038294  326404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:14:40.038336  326404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-189670
	I0317 11:14:40.056112  326404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/no-preload-189670/id_rsa Username:docker}
	I0317 11:14:40.152440  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:14:40.177123  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0317 11:14:40.204573  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 11:14:40.211456  326404 cache.go:157] /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0317 11:14:40.211482  326404 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 2.448524004s
	I0317 11:14:40.211497  326404 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0317 11:14:40.211518  326404 cache.go:87] Successfully saved all images to host disk.
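
Because this test passes --preload=false, there is no preloaded images tarball to extract; each image is fetched and saved individually, which is what the interleaved cache.go lines record. Judging only from the paths in the log, the on-disk cache ends up laid out like this:

    /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/
    ├── gcr.io/k8s-minikube/storage-provisioner_v5
    └── registry.k8s.io/
        ├── coredns/coredns_v1.11.3
        ├── etcd_3.5.16-0
        ├── kube-apiserver_v1.32.2
        ├── kube-controller-manager_v1.32.2
        ├── kube-proxy_v1.32.2
        ├── kube-scheduler_v1.32.2
        └── pause_3.10
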
	I0317 11:14:40.227899  326404 provision.go:87] duration metric: took 348.430777ms to configureAuth
	I0317 11:14:40.227938  326404 ubuntu.go:193] setting minikube options for container-runtime
	I0317 11:14:40.228140  326404 config.go:182] Loaded profile config "no-preload-189670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:14:40.228158  326404 machine.go:96] duration metric: took 990.09775ms to provisionDockerMachine
	I0317 11:14:40.228167  326404 client.go:171] duration metric: took 2.440929156s to LocalClient.Create
	I0317 11:14:40.228192  326404 start.go:167] duration metric: took 2.440979334s to libmachine.API.Create "no-preload-189670"
	I0317 11:14:40.228205  326404 start.go:293] postStartSetup for "no-preload-189670" (driver="docker")
	I0317 11:14:40.228242  326404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:14:40.228305  326404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:14:40.228349  326404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-189670
	I0317 11:14:40.246578  326404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/no-preload-189670/id_rsa Username:docker}
	I0317 11:14:40.344204  326404 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:14:40.347305  326404 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 11:14:40.347331  326404 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 11:14:40.347339  326404 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 11:14:40.347345  326404 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 11:14:40.347355  326404 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/addons for local assets ...
	I0317 11:14:40.347403  326404 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/files for local assets ...
	I0317 11:14:40.347467  326404 filesync.go:149] local asset: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem -> 116902.pem in /etc/ssl/certs
	I0317 11:14:40.347552  326404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:14:40.355237  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:14:40.377329  326404 start.go:296] duration metric: took 149.110619ms for postStartSetup
	I0317 11:14:40.377647  326404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-189670
	I0317 11:14:40.396111  326404 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/config.json ...
	I0317 11:14:40.396376  326404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 11:14:40.396414  326404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-189670
	I0317 11:14:40.413204  326404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/no-preload-189670/id_rsa Username:docker}
	I0317 11:14:40.507917  326404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 11:14:40.512157  326404 start.go:128] duration metric: took 2.726937112s to createHost
	I0317 11:14:40.512179  326404 start.go:83] releasing machines lock for "no-preload-189670", held for 2.72703814s
	I0317 11:14:40.512228  326404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-189670
	I0317 11:14:40.530747  326404 ssh_runner.go:195] Run: cat /version.json
	I0317 11:14:40.530795  326404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 11:14:40.530805  326404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-189670
	I0317 11:14:40.530853  326404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-189670
	I0317 11:14:40.549377  326404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/no-preload-189670/id_rsa Username:docker}
	I0317 11:14:40.549483  326404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/no-preload-189670/id_rsa Username:docker}
	I0317 11:14:40.717273  326404 ssh_runner.go:195] Run: systemctl --version
	I0317 11:14:40.721878  326404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:14:40.726046  326404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 11:14:40.748984  326404 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 11:14:40.749053  326404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:14:40.773834  326404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
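
The two find/sed passes above do CNI housekeeping: the first patches any loopback config in /etc/cni/net.d to carry a name field and a pinned cniVersion of 1.0.0, and the second renames bridge/podman configs to *.mk_disabled so that the CNI minikube installs later (kindnet, per the recommendation earlier in the log) owns the pod network. Reconstructed from the sed expressions rather than read off the node, the patched loopback config should look roughly like:

    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }
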
	I0317 11:14:40.773863  326404 start.go:495] detecting cgroup driver to use...
	I0317 11:14:40.773900  326404 detect.go:187] detected "cgroupfs" cgroup driver on host os
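
detect.go reports "cgroupfs" here, matching the CgroupDriver:cgroupfs field in the docker info dump earlier; keeping the node's runtime on the same cgroup driver as the host daemon avoids split cgroup ownership. A trivial sketch of reading that field directly (docker info's --format flag is standard; mirroring it into the node config is an assumption about the intent):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Ask the host docker daemon which cgroup driver it uses
    	// (prints "cgroupfs" on the agent in this log).
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		fmt.Println("detection failed, defaulting to cgroupfs:", err)
    		return
    	}
    	fmt.Println("detected cgroup driver:", strings.TrimSpace(string(out)))
    }
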
	I0317 11:14:40.773950  326404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:14:40.785071  326404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:14:40.795024  326404 docker.go:217] disabling cri-docker service (if available) ...
	I0317 11:14:40.795078  326404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 11:14:40.807018  326404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 11:14:40.820371  326404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 11:14:40.897012  326404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 11:14:40.978297  326404 docker.go:233] disabling docker service ...
	I0317 11:14:40.978358  326404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 11:14:40.997298  326404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 11:14:41.007435  326404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 11:14:41.089131  326404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 11:14:41.163991  326404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 11:14:41.174924  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:14:41.189904  326404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:14:41.198595  326404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:14:41.207650  326404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:14:41.207714  326404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:14:41.216482  326404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:14:41.225405  326404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:14:41.234963  326404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:14:41.243858  326404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:14:41.252016  326404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:14:41.260457  326404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:14:41.268895  326404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 11:14:41.277787  326404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:14:41.285350  326404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:14:41.292479  326404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:14:41.371004  326404 ssh_runner.go:195] Run: sudo systemctl restart containerd
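
Note: the sed edits above rewrite /etc/containerd/config.toml before the restart. A spot-check of the values they aim to produce (settings taken from the log; file layout assumed):

	grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
	    /etc/containerd/config.toml
	# expected, per the commands above:
	#   sandbox_image = "registry.k8s.io/pause:3.10"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true
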
	I0317 11:14:41.443083  326404 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 11:14:41.443159  326404 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 11:14:41.447452  326404 start.go:563] Will wait 60s for crictl version
	I0317 11:14:41.447505  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:14:41.451204  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:14:41.485327  326404 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 11:14:41.485397  326404 ssh_runner.go:195] Run: containerd --version
	I0317 11:14:41.508536  326404 ssh_runner.go:195] Run: containerd --version
	I0317 11:14:41.535834  326404 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 11:14:41.537230  326404 cli_runner.go:164] Run: docker network inspect no-preload-189670 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:14:41.556730  326404 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0317 11:14:41.560312  326404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
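
Note: the one-liner above is an idempotent /etc/hosts update: filter out any existing entry for the name, append the fresh one, then copy the temp file back over /etc/hosts. The same pattern, generalized (a sketch; name/ip here are just the values from the log):

	name=host.minikube.internal; ip=192.168.103.1
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts
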
	I0317 11:14:41.570323  326404 kubeadm.go:883] updating cluster {Name:no-preload-189670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-189670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:14:41.570417  326404 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:14:41.570450  326404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:14:41.600756  326404 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0317 11:14:41.600781  326404 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.2 registry.k8s.io/kube-controller-manager:v1.32.2 registry.k8s.io/kube-scheduler:v1.32.2 registry.k8s.io/kube-proxy:v1.32.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0317 11:14:41.600821  326404 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:41.600834  326404 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.2
	I0317 11:14:41.600849  326404 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0317 11:14:41.600862  326404 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0317 11:14:41.600905  326404 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0317 11:14:41.600927  326404 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 11:14:41.600981  326404 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.2
	I0317 11:14:41.600916  326404 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.2
	I0317 11:14:41.602239  326404 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.2
	I0317 11:14:41.602296  326404 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.2
	I0317 11:14:41.602240  326404 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:41.602300  326404 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0317 11:14:41.602240  326404 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0317 11:14:41.602241  326404 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.2
	I0317 11:14:41.602243  326404 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0317 11:14:41.602240  326404 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.2
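
Note: the "daemon lookup" errors above are expected on a no-preload run: minikube first probes the local Docker daemon for each image and, on a miss, falls back to its on-disk tarball cache. A manual probe of the same first step (sketch):

	docker image inspect registry.k8s.io/kube-apiserver:v1.32.2 --format '{{.Id}}' \
	  || echo "not in the local daemon; minikube falls back to the cached tarball"
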
	I0317 11:14:41.761219  326404 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.32.2" and sha "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef"
	I0317 11:14:41.761296  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.32.2
	I0317 11:14:41.761444  326404 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.11.3" and sha "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"
	I0317 11:14:41.761498  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.11.3
	I0317 11:14:41.767618  326404 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.32.2" and sha "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d"
	I0317 11:14:41.767676  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.32.2
	I0317 11:14:41.773566  326404 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.32.2" and sha "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5"
	I0317 11:14:41.773635  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.32.2
	I0317 11:14:41.774799  326404 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.5.16-0" and sha "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc"
	I0317 11:14:41.774858  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.5.16-0
	I0317 11:14:41.784826  326404 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.2" does not exist at hash "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef" in container runtime
	I0317 11:14:41.784872  326404 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.2
	I0317 11:14:41.784829  326404 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0317 11:14:41.784939  326404 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0317 11:14:41.784971  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:14:41.784917  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:14:41.789699  326404 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10" and sha "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"
	I0317 11:14:41.789765  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10
	I0317 11:14:41.791401  326404 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.2" does not exist at hash "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d" in container runtime
	I0317 11:14:41.791452  326404 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.2
	I0317 11:14:41.791491  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:14:41.792909  326404 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.32.2" and sha "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389"
	I0317 11:14:41.792963  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 11:14:41.801765  326404 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.2" needs transfer: "registry.k8s.io/kube-proxy:v1.32.2" does not exist at hash "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5" in container runtime
	I0317 11:14:41.801819  326404 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.2
	I0317 11:14:41.801826  326404 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0317 11:14:41.801846  326404 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0317 11:14:41.801862  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:14:41.801880  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:14:41.801926  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0317 11:14:41.801943  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.2
	I0317 11:14:41.815175  326404 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0317 11:14:41.815226  326404 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0317 11:14:41.815309  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:14:41.815352  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.2
	I0317 11:14:41.817362  326404 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.2" does not exist at hash "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389" in container runtime
	I0317 11:14:41.817400  326404 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 11:14:41.817437  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:14:41.842460  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.2
	I0317 11:14:41.842502  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0317 11:14:41.842538  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0317 11:14:41.845368  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.2
	I0317 11:14:41.910966  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0317 11:14:41.911086  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 11:14:41.911090  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.2
	I0317 11:14:42.016793  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0317 11:14:42.024513  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.2
	I0317 11:14:42.024609  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0317 11:14:42.025206  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.2
	I0317 11:14:42.027549  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0317 11:14:42.111729  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 11:14:42.111831  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.2
	I0317 11:14:42.219298  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0317 11:14:42.225634  326404 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0317 11:14:42.225693  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.2
	I0317 11:14:42.225720  326404 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2
	I0317 11:14:42.225730  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0317 11:14:42.225794  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.2
	I0317 11:14:42.225827  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0317 11:14:42.310071  326404 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2
	I0317 11:14:42.310177  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.2
	I0317 11:14:42.310292  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 11:14:42.320842  326404 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0317 11:14:42.320889  326404 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I0317 11:14:42.320916  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I0317 11:14:42.320939  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0317 11:14:42.409118  326404 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.32.2': No such file or directory
	I0317 11:14:42.409168  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 --> /var/lib/minikube/images/kube-apiserver_v1.32.2 (28680704 bytes)
	I0317 11:14:42.409239  326404 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0317 11:14:42.409331  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0317 11:14:42.417170  326404 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2
	I0317 11:14:42.417259  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.2
	I0317 11:14:42.433325  326404 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2
	I0317 11:14:42.433340  326404 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.32.2': No such file or directory
	I0317 11:14:42.433358  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 --> /var/lib/minikube/images/kube-scheduler_v1.32.2 (20667904 bytes)
	I0317 11:14:42.433416  326404 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.16-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.16-0': No such file or directory
	I0317 11:14:42.433435  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 --> /var/lib/minikube/images/etcd_3.5.16-0 (57690112 bytes)
	I0317 11:14:42.433460  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.2
	I0317 11:14:42.468778  326404 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0317 11:14:42.468819  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0317 11:14:42.468822  326404 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.32.2': No such file or directory
	I0317 11:14:42.468779  326404 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.32.2': No such file or directory
	I0317 11:14:42.468848  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 --> /var/lib/minikube/images/kube-controller-manager_v1.32.2 (26269696 bytes)
	I0317 11:14:42.468855  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 --> /var/lib/minikube/images/kube-proxy_v1.32.2 (30910464 bytes)
	I0317 11:14:42.582406  326404 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10
	I0317 11:14:42.582481  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10
	I0317 11:14:42.823332  326404 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0317 11:14:42.823374  326404 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0317 11:14:42.823418  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.11.3
	I0317 11:14:42.988059  326404 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0317 11:14:42.988130  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:44.013464  326404 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.11.3: (1.190021555s)
	I0317 11:14:44.013491  326404 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0317 11:14:44.013510  326404 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.2
	I0317 11:14:44.013546  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.32.2
	I0317 11:14:44.013549  326404 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.025393338s)
	I0317 11:14:44.013582  326404 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0317 11:14:44.013619  326404 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:44.013666  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:14:45.075607  326404 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.32.2: (1.062038686s)
	I0317 11:14:45.075632  326404 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 from cache
	I0317 11:14:45.075649  326404 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.2
	I0317 11:14:45.075658  326404 ssh_runner.go:235] Completed: which crictl: (1.061969948s)
	I0317 11:14:45.075691  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.32.2
	I0317 11:14:45.075717  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:46.154662  326404 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.32.2: (1.078941342s)
	I0317 11:14:46.154694  326404 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 from cache
	I0317 11:14:46.154712  326404 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.2
	I0317 11:14:46.154721  326404 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.078982839s)
	I0317 11:14:46.154759  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.32.2
	I0317 11:14:46.154779  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:47.131294  326404 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:14:47.131296  326404 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 from cache
	I0317 11:14:47.131388  326404 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.2
	I0317 11:14:47.131420  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.32.2
	I0317 11:14:47.164373  326404 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0317 11:14:47.164454  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0317 11:14:48.242747  326404 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.32.2: (1.111305361s)
	I0317 11:14:48.242775  326404 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 from cache
	I0317 11:14:48.242819  326404 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0317 11:14:48.242844  326404 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.078369897s)
	I0317 11:14:48.242859  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.16-0
	I0317 11:14:48.242879  326404 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0317 11:14:48.242903  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0317 11:14:50.245417  326404 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.16-0: (2.002524152s)
	I0317 11:14:50.245447  326404 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0317 11:14:50.245473  326404 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0317 11:14:50.245524  326404 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0317 11:14:50.631303  326404 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0317 11:14:50.631341  326404 cache_images.go:123] Successfully loaded all cached images
	I0317 11:14:50.631347  326404 cache_images.go:92] duration metric: took 9.030552386s to LoadCachedImages
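
Note: each cache-load round trip above is a stat-miss on the node, an scp of the cached tarball, and a ctr import into containerd's k8s.io namespace. The manual equivalent for one image (paths from the log; the "minikube-node" ssh alias is a placeholder):

	scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 \
	    minikube-node:/var/lib/minikube/images/pause_3.10
	ssh minikube-node sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10
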
	I0317 11:14:50.631360  326404 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.32.2 containerd true true} ...
	I0317 11:14:50.631485  326404 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-189670 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-189670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
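
Note: the empty ExecStart= line in the unit above is the standard systemd idiom for clearing the stock command list so the override's ExecStart takes its place (the drop-in itself is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down). Once installed, the merged unit can be inspected with:

	systemctl cat kubelet                  # unit plus drop-ins
	systemctl show kubelet -p ExecStart    # the effective command line
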
	I0317 11:14:50.631564  326404 ssh_runner.go:195] Run: sudo crictl info
	I0317 11:14:50.665998  326404 cni.go:84] Creating CNI manager for ""
	I0317 11:14:50.666022  326404 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:14:50.666038  326404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:14:50.666077  326404 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-189670 NodeName:no-preload-189670 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 11:14:50.666213  326404 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-189670"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
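
Note: the kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new and swapped into place before init. One way to sanity-check such a config without touching the node (a sketch, not what the test does):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
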
	
	I0317 11:14:50.666292  326404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:14:50.674434  326404 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0317 11:14:50.674491  326404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0317 11:14:50.683087  326404 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0317 11:14:50.683127  326404 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0317 11:14:50.683131  326404 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0317 11:14:50.683340  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 11:14:50.686836  326404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0317 11:14:50.686863  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0317 11:14:51.654094  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 11:14:51.658319  326404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0317 11:14:51.658360  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0317 11:14:51.761262  326404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 11:14:51.779328  326404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 11:14:51.796328  326404 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0317 11:14:51.796377  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
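
Note: the downloads above are checksum-pinned via the ?checksum=file:...sha256 query. The manual equivalent, assuming the .sha256 file holds just the bare digest:

	curl -fsSLo kubelet https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256)  kubelet" \
	  | sha256sum --check -
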
	I0317 11:14:52.013847  326404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 11:14:52.023357  326404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0317 11:14:52.039708  326404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:14:52.056753  326404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2310 bytes)
	I0317 11:14:52.074174  326404 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0317 11:14:52.077829  326404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:14:52.088124  326404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:14:52.165784  326404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:14:52.179179  326404 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670 for IP: 192.168.103.2
	I0317 11:14:52.179203  326404 certs.go:194] generating shared ca certs ...
	I0317 11:14:52.179225  326404 certs.go:226] acquiring lock for ca certs: {Name:mkf58624c63680e02907d28348d45986283847c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:52.179436  326404 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key
	I0317 11:14:52.179494  326404 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key
	I0317 11:14:52.179506  326404 certs.go:256] generating profile certs ...
	I0317 11:14:52.179575  326404 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/client.key
	I0317 11:14:52.179593  326404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/client.crt with IP's: []
	I0317 11:14:52.290144  326404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/client.crt ...
	I0317 11:14:52.290173  326404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/client.crt: {Name:mkcf9d2404a39fa0ed16e6b7ced0a9cdc0fe557e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:52.290344  326404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/client.key ...
	I0317 11:14:52.290358  326404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/client.key: {Name:mk7aeca8c6be227240a3c92460e2f56fdc2878a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:52.290433  326404 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.key.f117e927
	I0317 11:14:52.290447  326404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.crt.f117e927 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0317 11:14:52.416383  326404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.crt.f117e927 ...
	I0317 11:14:52.416413  326404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.crt.f117e927: {Name:mkefbbb066bd9fc464975c2b10a4fa8d37c276ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:52.416585  326404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.key.f117e927 ...
	I0317 11:14:52.416612  326404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.key.f117e927: {Name:mke636cbca3b959d669e5dfcafb77304703af271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:52.416715  326404 certs.go:381] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.crt.f117e927 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.crt
	I0317 11:14:52.416835  326404 certs.go:385] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.key.f117e927 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.key
	I0317 11:14:52.416924  326404 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/proxy-client.key
	I0317 11:14:52.416949  326404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/proxy-client.crt with IP's: []
	I0317 11:14:52.764524  326404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/proxy-client.crt ...
	I0317 11:14:52.764557  326404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/proxy-client.crt: {Name:mk0c685511dacc059b42580e22af90f59aadc7a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:14:52.764740  326404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/proxy-client.key ...
	I0317 11:14:52.764759  326404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/proxy-client.key: {Name:mk2728c236ea85942dbcc92f140bf61038c80b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
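
Note: the profile certs above (client, apiserver, proxy-client) are all signed by the shared minikubeCA generated earlier. A rough openssl analogue for the client cert (filenames and subject fields are assumptions, not read from the log):

	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	    -out client.crt -days 365
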
	I0317 11:14:52.764955  326404 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem (1338 bytes)
	W0317 11:14:52.765005  326404 certs.go:480] ignoring /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690_empty.pem, impossibly tiny 0 bytes
	I0317 11:14:52.765020  326404 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 11:14:52.765062  326404 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem (1082 bytes)
	I0317 11:14:52.765106  326404 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem (1123 bytes)
	I0317 11:14:52.765139  326404 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem (1679 bytes)
	I0317 11:14:52.765201  326404 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:14:52.765786  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:14:52.789367  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:14:52.813243  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:14:52.835569  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 11:14:52.857672  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0317 11:14:52.880062  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 11:14:52.902609  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:14:52.925070  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/no-preload-189670/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 11:14:52.947596  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:14:52.970515  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem --> /usr/share/ca-certificates/11690.pem (1338 bytes)
	I0317 11:14:52.994523  326404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /usr/share/ca-certificates/116902.pem (1708 bytes)
	I0317 11:14:53.019309  326404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:14:53.035756  326404 ssh_runner.go:195] Run: openssl version
	I0317 11:14:53.040744  326404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:14:53.050206  326404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:14:53.053509  326404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:14:53.053566  326404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:14:53.059965  326404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:14:53.069118  326404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11690.pem && ln -fs /usr/share/ca-certificates/11690.pem /etc/ssl/certs/11690.pem"
	I0317 11:14:53.078002  326404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11690.pem
	I0317 11:14:53.081656  326404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:32 /usr/share/ca-certificates/11690.pem
	I0317 11:14:53.081701  326404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11690.pem
	I0317 11:14:53.088315  326404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11690.pem /etc/ssl/certs/51391683.0"
	I0317 11:14:53.097309  326404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116902.pem && ln -fs /usr/share/ca-certificates/116902.pem /etc/ssl/certs/116902.pem"
	I0317 11:14:53.106791  326404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116902.pem
	I0317 11:14:53.110119  326404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:32 /usr/share/ca-certificates/116902.pem
	I0317 11:14:53.110176  326404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116902.pem
	I0317 11:14:53.116505  326404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116902.pem /etc/ssl/certs/3ec20f2e.0"
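
Note: the b5213941.0 / 51391683.0 / 3ec20f2e.0 link names above are OpenSSL subject hashes; the TLS stack locates a CA in /etc/ssl/certs by hashing the subject and looking for <hash>.0. Reproducing the first link by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
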
	I0317 11:14:53.125016  326404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:14:53.128666  326404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:14:53.128713  326404 kubeadm.go:392] StartCluster: {Name:no-preload-189670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-189670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:14:53.128776  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 11:14:53.128814  326404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 11:14:53.162390  326404 cri.go:89] found id: ""
	I0317 11:14:53.162475  326404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:14:53.170988  326404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:14:53.179384  326404 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 11:14:53.179443  326404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:14:53.187420  326404 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:14:53.187440  326404 kubeadm.go:157] found existing configuration files:
	
	I0317 11:14:53.187485  326404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 11:14:53.195524  326404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:14:53.195580  326404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:14:53.203547  326404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 11:14:53.212118  326404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:14:53.212164  326404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:14:53.220105  326404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 11:14:53.228274  326404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:14:53.228324  326404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:14:53.236164  326404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 11:14:53.244004  326404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:14:53.244060  326404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
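
Note: the grep/rm sequence above is minikube's stale-config cleanup: any kubeconfig that does not reference this cluster's control-plane endpoint is removed before init. Condensed into a loop (a sketch of the same logic):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
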
	I0317 11:14:53.251879  326404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 11:14:53.287733  326404 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 11:14:53.287854  326404 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:14:53.303862  326404 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 11:14:53.303962  326404 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 11:14:53.304040  326404 kubeadm.go:310] OS: Linux
	I0317 11:14:53.304116  326404 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 11:14:53.304233  326404 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 11:14:53.304331  326404 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 11:14:53.304424  326404 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 11:14:53.304523  326404 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 11:14:53.304612  326404 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 11:14:53.304686  326404 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 11:14:53.304775  326404 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 11:14:53.304846  326404 kubeadm.go:310] CGROUPS_BLKIO: enabled
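The CGROUPS_* lines above come from kubeadm's system verification of kernel cgroup controllers. A minimal Go approximation, assuming the cgroup v1 layout where /proc/cgroups lists one subsystem per row with an "enabled" column (consistent with the cgroups v1 maintenance-mode warning later in this log):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // header row
		}
		// columns: subsys_name  hierarchy  num_cgroups  enabled
		fields := strings.Fields(line)
		if len(fields) == 4 {
			state := "disabled"
			if fields[3] == "1" {
				state = "enabled"
			}
			fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
		}
	}
}
```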
	I0317 11:14:53.357837  326404 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:14:53.358007  326404 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:14:53.358236  326404 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:14:53.363379  326404 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:14:53.365793  326404 out.go:235]   - Generating certificates and keys ...
	I0317 11:14:53.365910  326404 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:14:53.365990  326404 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:14:53.590116  326404 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:14:53.692864  326404 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:14:53.832833  326404 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:14:54.060092  326404 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:14:54.169430  326404 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:14:54.169609  326404 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-189670] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0317 11:14:54.405429  326404 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:14:54.405557  326404 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-189670] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0317 11:14:54.606290  326404 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:14:54.782544  326404 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:14:55.078747  326404 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:14:55.078859  326404 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:14:55.408547  326404 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:14:55.559284  326404 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 11:14:55.687886  326404 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:14:55.756360  326404 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:14:55.819080  326404 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:14:55.819676  326404 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:14:55.822845  326404 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:14:55.825245  326404 out.go:235]   - Booting up control plane ...
	I0317 11:14:55.825378  326404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:14:55.825485  326404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:14:55.825577  326404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:14:55.836564  326404 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:14:55.842657  326404 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:14:55.842751  326404 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:14:55.930632  326404 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 11:14:55.930805  326404 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 11:14:56.932032  326404 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001527033s
	I0317 11:14:56.932153  326404 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 11:15:01.434149  326404 kubeadm.go:310] [api-check] The API server is healthy after 4.502015937s
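Both health gates above ([kubelet-check] against http://127.0.0.1:10248/healthz and [api-check] against the apiserver) follow the same poll-until-200 pattern with a 4m0s ceiling. A minimal sketch of that loop, assuming a self-signed serving cert (hence InsecureSkipVerify):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy keeps GETing a healthz endpoint until it returns 200 OK or the
// deadline passes, the same shape as kubeadm's kubelet-check and api-check.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.103.2:8443/healthz", 4*time.Minute))
}
```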
	I0317 11:15:01.446266  326404 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:15:01.462382  326404 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:15:01.480885  326404 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:15:01.481156  326404 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-189670 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 11:15:01.489942  326404 kubeadm.go:310] [bootstrap-token] Using token: giz3uz.qfokuxgkymjxw763
	I0317 11:15:01.491505  326404 out.go:235]   - Configuring RBAC rules ...
	I0317 11:15:01.491669  326404 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:15:01.495041  326404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:15:01.501922  326404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:15:01.504910  326404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:15:01.507897  326404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:15:01.510522  326404 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:15:01.839926  326404 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:15:02.266086  326404 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:15:02.840110  326404 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:15:02.840996  326404 kubeadm.go:310] 
	I0317 11:15:02.841113  326404 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:15:02.841125  326404 kubeadm.go:310] 
	I0317 11:15:02.841234  326404 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:15:02.841244  326404 kubeadm.go:310] 
	I0317 11:15:02.841279  326404 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:15:02.841365  326404 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:15:02.841477  326404 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:15:02.841510  326404 kubeadm.go:310] 
	I0317 11:15:02.841568  326404 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:15:02.841584  326404 kubeadm.go:310] 
	I0317 11:15:02.841657  326404 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:15:02.841666  326404 kubeadm.go:310] 
	I0317 11:15:02.841748  326404 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:15:02.841863  326404 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:15:02.841978  326404 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:15:02.841990  326404 kubeadm.go:310] 
	I0317 11:15:02.842115  326404 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:15:02.842219  326404 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:15:02.842229  326404 kubeadm.go:310] 
	I0317 11:15:02.842346  326404 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token giz3uz.qfokuxgkymjxw763 \
	I0317 11:15:02.842513  326404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 \
	I0317 11:15:02.842541  326404 kubeadm.go:310] 	--control-plane 
	I0317 11:15:02.842550  326404 kubeadm.go:310] 
	I0317 11:15:02.842688  326404 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:15:02.842707  326404 kubeadm.go:310] 
	I0317 11:15:02.842812  326404 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token giz3uz.qfokuxgkymjxw763 \
	I0317 11:15:02.842934  326404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 
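The --discovery-token-ca-cert-hash in the join commands above is, per kubeadm's token-based discovery documentation, the SHA-256 of the cluster CA certificate's Subject Public Key Info (SPKI). A sketch that recomputes it; the ca.crt filename under the certificateDir from this log is an assumption:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path: certificateDir from this log + kubeadm's usual name.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Hash is over the DER-encoded SubjectPublicKeyInfo, not the whole cert.
	spki, _ := x509.MarshalPKIXPublicKey(cert.PublicKey)
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```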
	I0317 11:15:02.845200  326404 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 11:15:02.845462  326404 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 11:15:02.845575  326404 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 11:15:02.845609  326404 cni.go:84] Creating CNI manager for ""
	I0317 11:15:02.845616  326404 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:15:02.847522  326404 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 11:15:02.849017  326404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 11:15:02.853171  326404 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 11:15:02.853194  326404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 11:15:02.870617  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 11:15:03.078040  326404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:15:03.078105  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:15:03.078156  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-189670 minikube.k8s.io/updated_at=2025_03_17T11_15_03_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=no-preload-189670 minikube.k8s.io/primary=true
	I0317 11:15:03.153088  326404 ops.go:34] apiserver oom_adj: -16
	I0317 11:15:03.153104  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:15:03.653625  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:15:04.153971  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:15:04.653876  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:15:05.153279  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:15:05.653226  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:15:06.153255  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:15:06.653947  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:15:07.154182  326404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:15:07.220689  326404 kubeadm.go:1113] duration metric: took 4.142648769s to wait for elevateKubeSystemPrivileges
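The elevateKubeSystemPrivileges wait above reruns `kubectl get sa default` at roughly 500ms intervals until the default ServiceAccount exists, which is the signal that kube-system is ready to accept the RBAC binding. A standalone sketch of that retry loop (paths taken from this log, run locally rather than over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitDefaultSA(kubectlBin, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectlBin, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // default ServiceAccount exists
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}

func main() {
	fmt.Println(waitDefaultSA("/var/lib/minikube/binaries/v1.32.2/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute))
}
```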
	I0317 11:15:07.220720  326404 kubeadm.go:394] duration metric: took 14.092012142s to StartCluster
	I0317 11:15:07.220740  326404 settings.go:142] acquiring lock: {Name:mk2a57d556efff40ccd4336229d7a78216b861f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:15:07.220815  326404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:15:07.222061  326404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/kubeconfig: {Name:mk686b9f6159ab958672b945ae0aa5a9c96e9ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:15:07.222329  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:15:07.222336  326404 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:15:07.222417  326404 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:15:07.222508  326404 addons.go:69] Setting storage-provisioner=true in profile "no-preload-189670"
	I0317 11:15:07.222518  326404 addons.go:69] Setting default-storageclass=true in profile "no-preload-189670"
	I0317 11:15:07.222537  326404 config.go:182] Loaded profile config "no-preload-189670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:15:07.222545  326404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-189670"
	I0317 11:15:07.222528  326404 addons.go:238] Setting addon storage-provisioner=true in "no-preload-189670"
	I0317 11:15:07.222674  326404 host.go:66] Checking if "no-preload-189670" exists ...
	I0317 11:15:07.222958  326404 cli_runner.go:164] Run: docker container inspect no-preload-189670 --format={{.State.Status}}
	I0317 11:15:07.223117  326404 cli_runner.go:164] Run: docker container inspect no-preload-189670 --format={{.State.Status}}
	I0317 11:15:07.224351  326404 out.go:177] * Verifying Kubernetes components...
	I0317 11:15:07.225708  326404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:07.245693  326404 addons.go:238] Setting addon default-storageclass=true in "no-preload-189670"
	I0317 11:15:07.245727  326404 host.go:66] Checking if "no-preload-189670" exists ...
	I0317 11:15:07.246027  326404 cli_runner.go:164] Run: docker container inspect no-preload-189670 --format={{.State.Status}}
	I0317 11:15:07.247164  326404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:15:07.248755  326404 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:15:07.248779  326404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:15:07.248840  326404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-189670
	I0317 11:15:07.272000  326404 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:15:07.272024  326404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:15:07.272094  326404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-189670
	I0317 11:15:07.284273  326404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/no-preload-189670/id_rsa Username:docker}
	I0317 11:15:07.291342  326404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/no-preload-189670/id_rsa Username:docker}
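The two `docker container inspect -f ...HostPort` calls above resolve which host port Docker mapped to the container's SSH port 22 (here 33098), which the new ssh clients then dial on 127.0.0.1. The same Go template can be exercised standalone:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort asks Docker for the host port bound to the container's 22/tcp,
// using the same inspect template that appears in the log above.
func sshPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("no-preload-189670")
	fmt.Println(port, err)
}
```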
	I0317 11:15:07.431092  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 11:15:07.523652  326404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:15:07.630409  326404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:15:07.633757  326404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:15:08.116937  326404 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
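The sed pipeline above splices two stanzas into the CoreDNS Corefile before replacing the ConfigMap: a `log` directive ahead of `errors`, and this hosts block ahead of the resolv.conf forward, which is what makes host.minikube.internal resolvable from pods:

```
        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
```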
	I0317 11:15:08.118334  326404 node_ready.go:35] waiting up to 6m0s for node "no-preload-189670" to be "Ready" ...
	I0317 11:15:08.127848  326404 node_ready.go:49] node "no-preload-189670" has status "Ready":"True"
	I0317 11:15:08.127878  326404 node_ready.go:38] duration metric: took 9.515639ms for node "no-preload-189670" to be "Ready" ...
	I0317 11:15:08.127891  326404 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:15:08.131124  326404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace to be "Ready" ...
	I0317 11:15:08.425023  326404 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:15:08.426572  326404 addons.go:514] duration metric: took 1.204156927s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 11:15:08.621438  326404 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-189670" context rescaled to 1 replicas
	I0317 11:15:10.136367  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:12.136780  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:14.636977  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:17.135560  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:19.636062  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:21.636559  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:23.639066  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:26.136666  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:28.635706  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:30.635924  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:33.137314  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:35.636665  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:38.136724  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:40.136899  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:42.138061  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:44.636313  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:46.636346  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:49.135190  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:51.136059  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:53.636278  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:56.136371  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:15:58.635961  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:01.135078  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:03.136132  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:05.635645  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:07.635999  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:09.636369  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:12.136068  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:14.136774  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:16.635721  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:18.636454  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:20.636812  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:23.135753  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:25.135922  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:27.136155  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:29.635812  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:32.136996  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:34.635774  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:36.636534  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:39.135582  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:41.636523  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:44.136746  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:46.636255  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:48.636905  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:51.135637  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:53.137059  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:55.635695  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:57.635838  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:16:59.636532  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:02.135911  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:04.636576  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:06.636732  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:09.135731  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:11.635595  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:13.635747  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:16.135472  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:18.136026  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:20.136770  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:22.635831  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:24.636435  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:27.135904  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:29.635620  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:32.136135  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:34.636320  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:37.135889  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:39.136763  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:41.635662  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:43.636431  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:45.636570  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:48.136021  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:50.635628  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:53.136552  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:55.636398  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:17:57.636581  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:00.137012  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:02.636254  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:04.645370  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:07.136801  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:09.636686  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:12.136050  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:14.136189  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:16.136247  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:18.635912  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:20.636105  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:22.636626  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:24.636692  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:26.636934  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:29.135768  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:31.635865  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:33.636860  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:36.136947  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:38.636623  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:41.135880  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:43.636853  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:46.135902  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:48.136154  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:50.636003  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:52.636399  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:55.135661  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:57.136112  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:18:59.636229  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:19:02.135584  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:19:04.636496  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:19:06.636611  326404 pod_ready.go:103] pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace has status "Ready":"False"
	I0317 11:19:08.136672  326404 pod_ready.go:82] duration metric: took 4m0.005520065s for pod "coredns-668d6bf9bc-nrkfd" in "kube-system" namespace to be "Ready" ...
	E0317 11:19:08.136693  326404 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:19:08.136701  326404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-p5rmf" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.138423  326404 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-p5rmf" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-p5rmf" not found
	I0317 11:19:08.138445  326404 pod_ready.go:82] duration metric: took 1.737563ms for pod "coredns-668d6bf9bc-p5rmf" in "kube-system" namespace to be "Ready" ...
	E0317 11:19:08.138455  326404 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-p5rmf" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-p5rmf" not found
	I0317 11:19:08.138464  326404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-189670" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.141799  326404 pod_ready.go:93] pod "etcd-no-preload-189670" in "kube-system" namespace has status "Ready":"True"
	I0317 11:19:08.141815  326404 pod_ready.go:82] duration metric: took 3.344598ms for pod "etcd-no-preload-189670" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.141825  326404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-189670" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.144886  326404 pod_ready.go:93] pod "kube-apiserver-no-preload-189670" in "kube-system" namespace has status "Ready":"True"
	I0317 11:19:08.144900  326404 pod_ready.go:82] duration metric: took 3.070211ms for pod "kube-apiserver-no-preload-189670" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.144911  326404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-189670" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.147828  326404 pod_ready.go:93] pod "kube-controller-manager-no-preload-189670" in "kube-system" namespace has status "Ready":"True"
	I0317 11:19:08.147844  326404 pod_ready.go:82] duration metric: took 2.926335ms for pod "kube-controller-manager-no-preload-189670" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.147851  326404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dw92z" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.335005  326404 pod_ready.go:93] pod "kube-proxy-dw92z" in "kube-system" namespace has status "Ready":"True"
	I0317 11:19:08.335026  326404 pod_ready.go:82] duration metric: took 187.168808ms for pod "kube-proxy-dw92z" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.335037  326404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-189670" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.735971  326404 pod_ready.go:93] pod "kube-scheduler-no-preload-189670" in "kube-system" namespace has status "Ready":"True"
	I0317 11:19:08.735995  326404 pod_ready.go:82] duration metric: took 400.945529ms for pod "kube-scheduler-no-preload-189670" in "kube-system" namespace to be "Ready" ...
	I0317 11:19:08.736005  326404 pod_ready.go:39] duration metric: took 4m0.608089145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
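For reference, a minimal sketch (using client-go types, which minikube also builds on) of what the `has status "Ready":"False"` lines above are testing; the real loop lives in minikube's pod_ready.go:

```go
// Package podready illustrates the readiness test: a pod counts as "Ready"
// only when the PodReady condition in status.conditions has status True.
package podready

import corev1 "k8s.io/api/core/v1"

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false // no PodReady condition reported yet
}
```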
	I0317 11:19:08.736029  326404 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:19:08.736082  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:19:08.736142  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:19:08.769924  326404 cri.go:89] found id: "3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b"
	I0317 11:19:08.769943  326404 cri.go:89] found id: ""
	I0317 11:19:08.769950  326404 logs.go:282] 1 containers: [3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b]
	I0317 11:19:08.769992  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:08.773697  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:19:08.773758  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:19:08.806668  326404 cri.go:89] found id: "9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd"
	I0317 11:19:08.806687  326404 cri.go:89] found id: ""
	I0317 11:19:08.806694  326404 logs.go:282] 1 containers: [9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd]
	I0317 11:19:08.806734  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:08.810115  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:19:08.810192  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:19:08.844777  326404 cri.go:89] found id: ""
	I0317 11:19:08.844800  326404 logs.go:282] 0 containers: []
	W0317 11:19:08.844808  326404 logs.go:284] No container was found matching "coredns"
	I0317 11:19:08.844818  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:19:08.844867  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:19:08.878949  326404 cri.go:89] found id: "b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99"
	I0317 11:19:08.878970  326404 cri.go:89] found id: ""
	I0317 11:19:08.878978  326404 logs.go:282] 1 containers: [b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99]
	I0317 11:19:08.879034  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:08.882457  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:19:08.882519  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:19:08.914220  326404 cri.go:89] found id: "b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e"
	I0317 11:19:08.914240  326404 cri.go:89] found id: ""
	I0317 11:19:08.914246  326404 logs.go:282] 1 containers: [b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e]
	I0317 11:19:08.914297  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:08.917733  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:19:08.917784  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:19:08.951544  326404 cri.go:89] found id: "330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248"
	I0317 11:19:08.951564  326404 cri.go:89] found id: ""
	I0317 11:19:08.951570  326404 logs.go:282] 1 containers: [330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248]
	I0317 11:19:08.951617  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:08.955138  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:19:08.955197  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:19:08.989599  326404 cri.go:89] found id: ""
	I0317 11:19:08.989624  326404 logs.go:282] 0 containers: []
	W0317 11:19:08.989632  326404 logs.go:284] No container was found matching "kindnet"
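Each `crictl ps -a --quiet --name=...` above prints matching container IDs one per line; an empty result is what triggers the `No container was found` warnings for coredns and kindnet. A standalone sketch of the same lookup:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers lists CRI container IDs by name, exactly as the log does.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "coredns", "kindnet"} {
		ids, err := findContainers(c)
		fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
	}
}
```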
	I0317 11:19:08.989653  326404 logs.go:123] Gathering logs for etcd [9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd] ...
	I0317 11:19:08.989665  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd"
	I0317 11:19:09.027070  326404 logs.go:123] Gathering logs for kube-scheduler [b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99] ...
	I0317 11:19:09.027100  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99"
	I0317 11:19:09.067711  326404 logs.go:123] Gathering logs for kube-proxy [b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e] ...
	I0317 11:19:09.067740  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e"
	I0317 11:19:09.104571  326404 logs.go:123] Gathering logs for kube-controller-manager [330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248] ...
	I0317 11:19:09.104610  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248"
	I0317 11:19:09.154073  326404 logs.go:123] Gathering logs for container status ...
	I0317 11:19:09.154118  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:19:09.191698  326404 logs.go:123] Gathering logs for kubelet ...
	I0317 11:19:09.191728  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:19:09.287872  326404 logs.go:123] Gathering logs for dmesg ...
	I0317 11:19:09.287906  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:19:09.307769  326404 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:19:09.307799  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:19:09.389776  326404 logs.go:123] Gathering logs for kube-apiserver [3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b] ...
	I0317 11:19:09.389811  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b"
	I0317 11:19:09.433309  326404 logs.go:123] Gathering logs for containerd ...
	I0317 11:19:09.433340  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:19:11.983384  326404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:19:11.994205  326404 api_server.go:72] duration metric: took 4m4.771834044s to wait for apiserver process to appear ...
	I0317 11:19:11.994232  326404 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:19:11.994268  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:19:11.994312  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:19:12.025764  326404 cri.go:89] found id: "3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b"
	I0317 11:19:12.025789  326404 cri.go:89] found id: ""
	I0317 11:19:12.025797  326404 logs.go:282] 1 containers: [3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b]
	I0317 11:19:12.025845  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:12.029327  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:19:12.029382  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:19:12.062307  326404 cri.go:89] found id: "9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd"
	I0317 11:19:12.062329  326404 cri.go:89] found id: ""
	I0317 11:19:12.062336  326404 logs.go:282] 1 containers: [9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd]
	I0317 11:19:12.062377  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:12.065879  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:19:12.065931  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:19:12.098890  326404 cri.go:89] found id: ""
	I0317 11:19:12.098919  326404 logs.go:282] 0 containers: []
	W0317 11:19:12.098930  326404 logs.go:284] No container was found matching "coredns"
	I0317 11:19:12.098939  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:19:12.099007  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:19:12.131229  326404 cri.go:89] found id: "b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99"
	I0317 11:19:12.131268  326404 cri.go:89] found id: ""
	I0317 11:19:12.131278  326404 logs.go:282] 1 containers: [b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99]
	I0317 11:19:12.131332  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:12.134992  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:19:12.135084  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:19:12.167685  326404 cri.go:89] found id: "b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e"
	I0317 11:19:12.167712  326404 cri.go:89] found id: ""
	I0317 11:19:12.167723  326404 logs.go:282] 1 containers: [b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e]
	I0317 11:19:12.167773  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:12.171078  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:19:12.171142  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:19:12.202631  326404 cri.go:89] found id: "330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248"
	I0317 11:19:12.202654  326404 cri.go:89] found id: ""
	I0317 11:19:12.202662  326404 logs.go:282] 1 containers: [330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248]
	I0317 11:19:12.202713  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:12.206217  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:19:12.206275  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:19:12.239167  326404 cri.go:89] found id: ""
	I0317 11:19:12.239195  326404 logs.go:282] 0 containers: []
	W0317 11:19:12.239202  326404 logs.go:284] No container was found matching "kindnet"
	I0317 11:19:12.239215  326404 logs.go:123] Gathering logs for kube-proxy [b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e] ...
	I0317 11:19:12.239227  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e"
	I0317 11:19:12.272963  326404 logs.go:123] Gathering logs for kubelet ...
	I0317 11:19:12.272994  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:19:12.362120  326404 logs.go:123] Gathering logs for etcd [9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd] ...
	I0317 11:19:12.362152  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd"
	I0317 11:19:12.403912  326404 logs.go:123] Gathering logs for kube-controller-manager [330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248] ...
	I0317 11:19:12.403939  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248"
	I0317 11:19:12.450881  326404 logs.go:123] Gathering logs for containerd ...
	I0317 11:19:12.450919  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:19:12.499665  326404 logs.go:123] Gathering logs for container status ...
	I0317 11:19:12.499699  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:19:12.535989  326404 logs.go:123] Gathering logs for dmesg ...
	I0317 11:19:12.536019  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:19:12.557629  326404 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:19:12.557665  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:19:12.638126  326404 logs.go:123] Gathering logs for kube-apiserver [3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b] ...
	I0317 11:19:12.638163  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b"
	I0317 11:19:12.679521  326404 logs.go:123] Gathering logs for kube-scheduler [b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99] ...
	I0317 11:19:12.679558  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99"
	I0317 11:19:15.221986  326404 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0317 11:19:15.226935  326404 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0317 11:19:15.227984  326404 api_server.go:141] control plane version: v1.32.2
	I0317 11:19:15.228032  326404 api_server.go:131] duration metric: took 3.233792579s to wait for apiserver health ...
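The `control plane version` line above is read back from the apiserver. A sketch assuming it comes from the /version endpoint, which is typically reachable anonymously via the default system:public-info-viewer binding:

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed cert
	}}
	resp, err := client.Get("https://192.168.103.2:8443/version")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	// /version returns a version.Info JSON; gitVersion holds "v1.32.2" here.
	var info struct {
		GitVersion string `json:"gitVersion"`
	}
	json.NewDecoder(resp.Body).Decode(&info)
	fmt.Println("control plane version:", info.GitVersion)
}
```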
	I0317 11:19:15.228042  326404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:19:15.228083  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:19:15.228139  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:19:15.261735  326404 cri.go:89] found id: "3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b"
	I0317 11:19:15.261763  326404 cri.go:89] found id: ""
	I0317 11:19:15.261771  326404 logs.go:282] 1 containers: [3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b]
	I0317 11:19:15.261822  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:15.265247  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:19:15.265311  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:19:15.297224  326404 cri.go:89] found id: "9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd"
	I0317 11:19:15.297244  326404 cri.go:89] found id: ""
	I0317 11:19:15.297251  326404 logs.go:282] 1 containers: [9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd]
	I0317 11:19:15.297300  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:15.300625  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:19:15.300685  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:19:15.333363  326404 cri.go:89] found id: ""
	I0317 11:19:15.333389  326404 logs.go:282] 0 containers: []
	W0317 11:19:15.333400  326404 logs.go:284] No container was found matching "coredns"
	I0317 11:19:15.333407  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:19:15.333463  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:19:15.368150  326404 cri.go:89] found id: "b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99"
	I0317 11:19:15.368173  326404 cri.go:89] found id: ""
	I0317 11:19:15.368181  326404 logs.go:282] 1 containers: [b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99]
	I0317 11:19:15.368232  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:15.371794  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:19:15.371845  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:19:15.404457  326404 cri.go:89] found id: "b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e"
	I0317 11:19:15.404480  326404 cri.go:89] found id: ""
	I0317 11:19:15.404488  326404 logs.go:282] 1 containers: [b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e]
	I0317 11:19:15.404544  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:15.408078  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:19:15.408133  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:19:15.441988  326404 cri.go:89] found id: "330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248"
	I0317 11:19:15.442015  326404 cri.go:89] found id: ""
	I0317 11:19:15.442024  326404 logs.go:282] 1 containers: [330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248]
	I0317 11:19:15.442081  326404 ssh_runner.go:195] Run: which crictl
	I0317 11:19:15.445852  326404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:19:15.445914  326404 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:19:15.479843  326404 cri.go:89] found id: ""
	I0317 11:19:15.479867  326404 logs.go:282] 0 containers: []
	W0317 11:19:15.479875  326404 logs.go:284] No container was found matching "kindnet"
	I0317 11:19:15.479890  326404 logs.go:123] Gathering logs for container status ...
	I0317 11:19:15.479901  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:19:15.516466  326404 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:19:15.516497  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:19:15.596714  326404 logs.go:123] Gathering logs for etcd [9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd] ...
	I0317 11:19:15.596748  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd"
	I0317 11:19:15.635266  326404 logs.go:123] Gathering logs for containerd ...
	I0317 11:19:15.635296  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:19:15.684512  326404 logs.go:123] Gathering logs for kubelet ...
	I0317 11:19:15.684543  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:19:15.785669  326404 logs.go:123] Gathering logs for dmesg ...
	I0317 11:19:15.785703  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:19:15.806301  326404 logs.go:123] Gathering logs for kube-apiserver [3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b] ...
	I0317 11:19:15.806332  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b"
	I0317 11:19:15.848381  326404 logs.go:123] Gathering logs for kube-scheduler [b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99] ...
	I0317 11:19:15.848410  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99"
	I0317 11:19:15.889902  326404 logs.go:123] Gathering logs for kube-proxy [b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e] ...
	I0317 11:19:15.889933  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e"
	I0317 11:19:15.924367  326404 logs.go:123] Gathering logs for kube-controller-manager [330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248] ...
	I0317 11:19:15.924403  326404 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248"
	I0317 11:19:18.474065  326404 system_pods.go:59] 8 kube-system pods found
	I0317 11:19:18.474102  326404 system_pods.go:61] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:18.474108  326404 system_pods.go:61] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:18.474115  326404 system_pods.go:61] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:18.474120  326404 system_pods.go:61] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:18.474124  326404 system_pods.go:61] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:18.474127  326404 system_pods.go:61] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:18.474130  326404 system_pods.go:61] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:18.474133  326404 system_pods.go:61] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:18.474139  326404 system_pods.go:74] duration metric: took 3.246090514s to wait for pod list to return data ...
	I0317 11:19:18.474149  326404 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:19:18.476311  326404 default_sa.go:45] found service account: "default"
	I0317 11:19:18.476335  326404 default_sa.go:55] duration metric: took 2.179901ms for default service account to be created ...
	I0317 11:19:18.476346  326404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:19:18.478624  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:18.478655  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:18.478661  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:18.478668  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:18.478672  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:18.478677  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:18.478683  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:18.478688  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:18.478692  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:18.478714  326404 retry.go:31] will retry after 264.846865ms: missing components: kube-dns
	I0317 11:19:18.746882  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:18.746914  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:18.746920  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:18.746928  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:18.746932  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:18.746936  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:18.746939  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:18.746945  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:18.746948  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:18.746962  326404 retry.go:31] will retry after 286.588985ms: missing components: kube-dns
	I0317 11:19:19.037166  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:19.037197  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:19.037202  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:19.037210  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:19.037214  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:19.037221  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:19.037225  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:19.037228  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:19.037231  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:19.037244  326404 retry.go:31] will retry after 412.743326ms: missing components: kube-dns
	I0317 11:19:19.453942  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:19.453971  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:19.453976  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:19.453984  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:19.453988  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:19.453994  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:19.453999  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:19.454003  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:19.454008  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:19.454024  326404 retry.go:31] will retry after 605.950222ms: missing components: kube-dns
	I0317 11:19:20.064043  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:20.064075  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:20.064080  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:20.064088  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:20.064098  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:20.064105  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:20.064109  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:20.064115  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:20.064123  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:20.064141  326404 retry.go:31] will retry after 570.932354ms: missing components: kube-dns
	I0317 11:19:20.638718  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:20.638751  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:20.638758  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:20.638767  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:20.638771  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:20.638776  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:20.638781  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:20.638787  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:20.638792  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:20.638808  326404 retry.go:31] will retry after 574.125286ms: missing components: kube-dns
	I0317 11:19:21.216481  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:21.216513  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:21.216519  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:21.216526  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:21.216532  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:21.216538  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:21.216543  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:21.216548  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:21.216558  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:21.216580  326404 retry.go:31] will retry after 864.260132ms: missing components: kube-dns
	I0317 11:19:22.084155  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:22.084192  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:22.084201  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:22.084217  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:22.084223  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:22.084266  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:22.084280  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:22.084286  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:22.084294  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:22.084315  326404 retry.go:31] will retry after 967.641399ms: missing components: kube-dns
	I0317 11:19:23.055895  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:23.055926  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:23.055933  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:23.055941  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:23.055944  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:23.055948  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:23.055951  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:23.055955  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:23.055959  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:23.055972  326404 retry.go:31] will retry after 1.508011764s: missing components: kube-dns
	I0317 11:19:24.567547  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:24.567579  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:24.567585  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:24.567592  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:24.567598  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:24.567602  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:24.567605  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:24.567608  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:24.567611  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:24.567625  326404 retry.go:31] will retry after 2.26873298s: missing components: kube-dns
	I0317 11:19:26.840051  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:26.840083  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:26.840089  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:26.840097  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:26.840100  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:26.840105  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:26.840108  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:26.840113  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:26.840118  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:26.840134  326404 retry.go:31] will retry after 2.49517487s: missing components: kube-dns
	I0317 11:19:29.339601  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:29.339633  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:29.339639  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:29.339646  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:29.339653  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:29.339657  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:29.339660  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:29.339663  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:29.339666  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:29.339679  326404 retry.go:31] will retry after 3.310136008s: missing components: kube-dns
	I0317 11:19:32.653414  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:32.653447  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:32.653452  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:32.653460  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:32.653463  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:32.653467  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:32.653471  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:32.653474  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:32.653477  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:32.653494  326404 retry.go:31] will retry after 3.990128831s: missing components: kube-dns
	I0317 11:19:36.650671  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:36.650704  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:36.650709  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:36.650716  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:36.650720  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:36.650725  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:36.650727  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:36.650731  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:36.650733  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:36.650747  326404 retry.go:31] will retry after 4.651702764s: missing components: kube-dns
	I0317 11:19:41.308911  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:41.308944  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:41.308949  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:41.308957  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:41.308961  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:41.308966  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:41.308969  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:41.308972  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:41.308975  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:41.308990  326404 retry.go:31] will retry after 5.506550075s: missing components: kube-dns
	I0317 11:19:46.822576  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:46.822609  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:46.822615  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:46.822622  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:46.822626  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:46.822630  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:46.822634  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:46.822638  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:46.822641  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:46.822653  326404 retry.go:31] will retry after 5.401126495s: missing components: kube-dns
	I0317 11:19:52.229684  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:19:52.229717  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:19:52.229723  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:19:52.229731  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:19:52.229734  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:19:52.229739  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:19:52.229742  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:19:52.229745  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:19:52.229748  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:19:52.229762  326404 retry.go:31] will retry after 10.479575369s: missing components: kube-dns
	I0317 11:20:02.713022  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:02.713054  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:02.713060  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:02.713068  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:02.713071  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:02.713076  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:02.713086  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:02.713090  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:02.713093  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:02.713107  326404 retry.go:31] will retry after 12.130380992s: missing components: kube-dns
	I0317 11:20:14.847118  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:14.847156  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:14.847163  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:14.847172  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:14.847176  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:14.847181  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:14.847184  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:14.847187  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:14.847194  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:14.847208  326404 retry.go:31] will retry after 10.791921859s: missing components: kube-dns
	I0317 11:20:25.643642  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:25.643677  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:25.643687  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:25.643701  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:25.643706  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:25.643713  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:25.643718  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:25.643723  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:25.643727  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:25.643744  326404 retry.go:31] will retry after 15.233092286s: missing components: kube-dns
	I0317 11:20:40.881134  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:40.881166  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:40.881172  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:40.881180  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:40.881183  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:40.881187  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:40.881190  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:40.881194  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:40.881197  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:40.881210  326404 retry.go:31] will retry after 23.951072137s: missing components: kube-dns
	I0317 11:21:04.837935  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:04.837975  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:04.837986  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:21:04.837998  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:04.838004  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:21:04.838010  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:21:04.838016  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:21:04.838020  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:21:04.838025  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:21:04.838044  326404 retry.go:31] will retry after 29.604408571s: missing components: kube-dns
	I0317 11:21:34.446564  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:34.446602  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:34.446609  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:21:34.446620  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:34.446625  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:21:34.446633  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:21:34.446637  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:21:34.446644  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:21:34.446649  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:21:34.446672  326404 retry.go:31] will retry after 39.340349632s: missing components: kube-dns
	I0317 11:22:13.791135  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:13.791174  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:13.791182  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:22:13.791189  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:13.791193  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:22:13.791198  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:22:13.791201  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:22:13.791204  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:22:13.791207  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:22:13.791221  326404 retry.go:31] will retry after 37.076286109s: missing components: kube-dns
	I0317 11:22:50.872276  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:50.872306  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:50.872312  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:22:50.872319  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:50.872323  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:22:50.872329  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:22:50.872332  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:22:50.872336  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:22:50.872339  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:22:50.872352  326404 retry.go:31] will retry after 59.664508979s: missing components: kube-dns
	I0317 11:23:50.542962  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:23:50.543000  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:50.543007  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:23:50.543017  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:23:50.543021  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:23:50.543027  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:23:50.543030  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:23:50.543034  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:23:50.543037  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:23:50.543062  326404 retry.go:31] will retry after 54.915772165s: missing components: kube-dns
	I0317 11:24:45.462816  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:45.462854  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:45.462861  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:24:45.462869  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:45.462873  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:24:45.462878  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:24:45.462881  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:24:45.462885  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:24:45.462888  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:24:45.464780  326404 out.go:201] 
	W0317 11:24:45.466250  326404 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:24:45.466269  326404 out.go:270] * 
	W0317 11:24:45.467129  326404 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:24:45.468845  326404 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p no-preload-189670 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2": exit status 80
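Aside: the repeated "will retry after ...: missing components: kube-dns" lines in the stderr above come from minikube's wait loop, which polls the kube-system pod list and sleeps for a growing, jittered interval between attempts until the hard deadline ("wait 6m0s for node") expires. Below is a minimal, self-contained Go sketch of that pattern; the names waitForComponents and the demo check are illustrative only, not minikube's actual retry API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents polls check until it succeeds or timeout expires,
// sleeping for a jittered, roughly doubling interval between attempts,
// capped at maxSleep. This mirrors the growing delays in the log
// (264ms, 286ms, 412ms, ... up to roughly a minute).
func waitForComponents(check func() error, timeout, maxSleep time.Duration) error {
	deadline := time.Now().Add(timeout)
	sleep := 250 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v: %w", timeout, err)
		}
		// Jitter by up to 50% so concurrent waiters do not sync up.
		jittered := sleep + time.Duration(rand.Int63n(int64(sleep/2)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		if sleep *= 2; sleep > maxSleep {
			sleep = maxSleep
		}
	}
}

func main() {
	// Stand-in for the real kube-system pod check; like the failing run
	// above, it never reports kube-dns as running.
	check := func() error { return errors.New("missing components: kube-dns") }
	if err := waitForComponents(check, 5*time.Second, time.Minute); err != nil {
		fmt.Println("X Exiting:", err)
	}
}

In this run the check never succeeds because coredns-668d6bf9bc-nrkfd stays Pending, so the loop exhausts the deadline and the test exits with GUEST_START, exactly as shown above.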
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-189670
helpers_test.go:235: (dbg) docker inspect no-preload-189670:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f837a316a513ce7bf915b11319cb1e11d415f687944693dfc37dbd0bb53e29d2",
	        "Created": "2025-03-17T11:14:38.534882305Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 326909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-17T11:14:38.568399411Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/f837a316a513ce7bf915b11319cb1e11d415f687944693dfc37dbd0bb53e29d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f837a316a513ce7bf915b11319cb1e11d415f687944693dfc37dbd0bb53e29d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f837a316a513ce7bf915b11319cb1e11d415f687944693dfc37dbd0bb53e29d2/hosts",
	        "LogPath": "/var/lib/docker/containers/f837a316a513ce7bf915b11319cb1e11d415f687944693dfc37dbd0bb53e29d2/f837a316a513ce7bf915b11319cb1e11d415f687944693dfc37dbd0bb53e29d2-json.log",
	        "Name": "/no-preload-189670",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-189670:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-189670",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f837a316a513ce7bf915b11319cb1e11d415f687944693dfc37dbd0bb53e29d2",
	                "LowerDir": "/var/lib/docker/overlay2/f8b31a160f8304b0adaf8d07db75f30db851cdf5765f5769c39d9441eef9253e-init/diff:/var/lib/docker/overlay2/c513cb32e4b42c4b2e1258d7197e5cd39dcbb3306943490e9747416948e6aaf6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8b31a160f8304b0adaf8d07db75f30db851cdf5765f5769c39d9441eef9253e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8b31a160f8304b0adaf8d07db75f30db851cdf5765f5769c39d9441eef9253e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8b31a160f8304b0adaf8d07db75f30db851cdf5765f5769c39d9441eef9253e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-189670",
	                "Source": "/var/lib/docker/volumes/no-preload-189670/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-189670",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-189670",
	                "name.minikube.sigs.k8s.io": "no-preload-189670",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6ebbeecb937746d7005d1918d4cfc51c6f1def3d401c9fe17955a9d32a9a01c6",
	            "SandboxKey": "/var/run/docker/netns/6ebbeecb9377",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-189670": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:09:26:a7:4d:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c315322fd6f958b6200e84a20a39dfa9dc1fdc1987e59d002da197a1755e0d9f",
	                    "EndpointID": "259b4e4cc6a5fa118757d12ef9dfa14f9c70780247757a4849c13f79e3b74d17",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-189670",
	                        "f837a316a513"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
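Note: the inspect output above shows each exposed container port (22, 2376, 5000, 8443, 32443) published on an ephemeral localhost port (33098-33102). These mappings can be recovered by hand, assuming the no-preload-189670 container still exists, using the same Go template the harness itself runs later in this log:

	# list all published ports for the kic container
	docker port no-preload-189670
	# or pull out just the Kubernetes API server binding (8443/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-189670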
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-189670 -n no-preload-189670
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-189670 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:19 UTC | 17 Mar 25 11:19 UTC |
	|         | systemctl status kubelet --all                       |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:19 UTC | 17 Mar 25 11:19 UTC |
	|         | systemctl cat kubelet                                |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:19 UTC | 17 Mar 25 11:19 UTC |
	|         | journalctl -xeu kubelet --all                        |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | systemctl status docker --all                        |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | systemctl cat docker                                 |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | /etc/docker/daemon.json                              |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo docker                        | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | system info                                          |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | systemctl status cri-docker                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | systemctl cat cri-docker                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | cri-dockerd --version                                |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | systemctl status containerd                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | systemctl cat containerd                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /lib/systemd/system/containerd.service               |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo cat                           | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /etc/containerd/config.toml                          |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | containerd config dump                               |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | systemctl status crio --all                          |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo                               | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | systemctl cat crio --no-pager                        |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo find                          | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                              |         |         |                     |                     |
	| ssh     | -p kindnet-236437 sudo crio                          | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| delete  | -p kindnet-236437                                    | kindnet-236437               | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC | 17 Mar 25 11:20 UTC |
	| start   | -p                                                   | default-k8s-diff-port-627203 | jenkins | v1.35.0 | 17 Mar 25 11:20 UTC |                     |
	|         | default-k8s-diff-port-627203                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                       |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
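Note: rows in the Audit table with an empty End Time appear to be probes that exited non-zero; on a containerd-runtime node the docker and crio services are masked, so the `systemctl status docker` and `systemctl status crio` probes are expected to fail. A hypothetical re-run of one such probe, only valid while the profile exists (kindnet-236437 was deleted at the end of the table):

	minikube ssh -p kindnet-236437 sudo systemctl status docker --all --full --no-pager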
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 11:20:09
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
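Note: this is standard glog framing, so the first character of each entry encodes severity (I/W/E/F) followed by mmdd and a timestamp. To skim a saved copy of the log for problems only, a filter along these lines works (logs.txt is a hypothetical file name):

	grep -E '^[WEF][0-9]{4}' logs.txt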
	I0317 11:20:09.951775  341496 out.go:345] Setting OutFile to fd 1 ...
	I0317 11:20:09.951911  341496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:20:09.951918  341496 out.go:358] Setting ErrFile to fd 2...
	I0317 11:20:09.951924  341496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:20:09.952147  341496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 11:20:09.952741  341496 out.go:352] Setting JSON to false
	I0317 11:20:09.954025  341496 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3703,"bootTime":1742206707,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 11:20:09.954091  341496 start.go:139] virtualization: kvm guest
	I0317 11:20:09.956439  341496 out.go:177] * [default-k8s-diff-port-627203] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 11:20:09.957897  341496 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:20:09.957990  341496 notify.go:220] Checking for updates...
	I0317 11:20:09.960721  341496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:20:09.962333  341496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:20:09.963810  341496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 11:20:09.965290  341496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 11:20:09.966759  341496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:20:09.968637  341496 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:09.968800  341496 config.go:182] Loaded profile config "no-preload-189670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:09.968922  341496 config.go:182] Loaded profile config "old-k8s-version-702762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0317 11:20:09.969134  341496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:20:09.994726  341496 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 11:20:09.994957  341496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:20:10.047464  341496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:20:10.037717036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:20:10.047559  341496 docker.go:318] overlay module found
	I0317 11:20:10.049461  341496 out.go:177] * Using the docker driver based on user configuration
	I0317 11:20:10.050764  341496 start.go:297] selected driver: docker
	I0317 11:20:10.050780  341496 start.go:901] validating driver "docker" against <nil>
	I0317 11:20:10.050795  341496 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:20:10.051718  341496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:20:10.105955  341496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:20:10.096342154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:20:10.106128  341496 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:20:10.106353  341496 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:20:10.108473  341496 out.go:177] * Using Docker driver with root privileges
	I0317 11:20:10.109937  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:10.110100  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:10.110117  341496 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 11:20:10.110220  341496 start.go:340] cluster config:
	{Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:20:10.111829  341496 out.go:177] * Starting "default-k8s-diff-port-627203" primary control-plane node in "default-k8s-diff-port-627203" cluster
	I0317 11:20:10.113031  341496 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 11:20:10.114478  341496 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 11:20:10.115992  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:10.116043  341496 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 11:20:10.116053  341496 cache.go:56] Caching tarball of preloaded images
	I0317 11:20:10.116120  341496 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 11:20:10.116149  341496 preload.go:172] Found /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:20:10.116162  341496 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 11:20:10.116325  341496 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json ...
	I0317 11:20:10.116351  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json: {Name:mk848192ef1b40ae1077b4c3a36047479a0034b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:10.138687  341496 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 11:20:10.138707  341496 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 11:20:10.138729  341496 cache.go:230] Successfully downloaded all kic artifacts
	I0317 11:20:10.138768  341496 start.go:360] acquireMachinesLock for default-k8s-diff-port-627203: {Name:mkcbff1d84866f612a979fbe06c726407300b170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:20:10.138896  341496 start.go:364] duration metric: took 104.168µs to acquireMachinesLock for "default-k8s-diff-port-627203"
	I0317 11:20:10.138925  341496 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:20:10.139000  341496 start.go:125] createHost starting for "" (driver="docker")
	I0317 11:20:10.141230  341496 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0317 11:20:10.141482  341496 start.go:159] libmachine.API.Create for "default-k8s-diff-port-627203" (driver="docker")
	I0317 11:20:10.141513  341496 client.go:168] LocalClient.Create starting
	I0317 11:20:10.141581  341496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem
	I0317 11:20:10.141611  341496 main.go:141] libmachine: Decoding PEM data...
	I0317 11:20:10.141625  341496 main.go:141] libmachine: Parsing certificate...
	I0317 11:20:10.141678  341496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem
	I0317 11:20:10.141696  341496 main.go:141] libmachine: Decoding PEM data...
	I0317 11:20:10.141706  341496 main.go:141] libmachine: Parsing certificate...
	I0317 11:20:10.142029  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 11:20:10.160384  341496 cli_runner.go:211] docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 11:20:10.160474  341496 network_create.go:284] running [docker network inspect default-k8s-diff-port-627203] to gather additional debugging logs...
	I0317 11:20:10.160501  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203
	W0317 11:20:10.178195  341496 cli_runner.go:211] docker network inspect default-k8s-diff-port-627203 returned with exit code 1
	I0317 11:20:10.178227  341496 network_create.go:287] error running [docker network inspect default-k8s-diff-port-627203]: docker network inspect default-k8s-diff-port-627203: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-627203 not found
	I0317 11:20:10.178241  341496 network_create.go:289] output of [docker network inspect default-k8s-diff-port-627203]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-627203 not found
	
	** /stderr **
	I0317 11:20:10.178338  341496 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:20:10.197679  341496 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
	I0317 11:20:10.198624  341496 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-00bf62ef0133 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:c5:34:86:d6:21} reservation:<nil>}
	I0317 11:20:10.199639  341496 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-81e0001ceae7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:6a:cf:1c:79:e6} reservation:<nil>}
	I0317 11:20:10.200718  341496 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d24500}
	I0317 11:20:10.200739  341496 network_create.go:124] attempt to create docker network default-k8s-diff-port-627203 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0317 11:20:10.200784  341496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 default-k8s-diff-port-627203
	I0317 11:20:10.255439  341496 network_create.go:108] docker network default-k8s-diff-port-627203 192.168.76.0/24 created
	I0317 11:20:10.255568  341496 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-627203" container
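Note: minikube walked the private 192.168.x.0/24 ranges, skipped the three subnets already held by other test profiles, settled on 192.168.76.0/24, and reserved the first client address (192.168.76.2) as the node's static IP. The result can be confirmed against the docker daemon; a sketch assuming the network is still present:

	docker network inspect default-k8s-diff-port-627203 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.76.0/24 192.168.76.1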
	I0317 11:20:10.255629  341496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 11:20:10.274724  341496 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-627203 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --label created_by.minikube.sigs.k8s.io=true
	I0317 11:20:10.294680  341496 oci.go:103] Successfully created a docker volume default-k8s-diff-port-627203
	I0317 11:20:10.294772  341496 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-627203-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --entrypoint /usr/bin/test -v default-k8s-diff-port-627203:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 11:20:10.747828  341496 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-627203
	I0317 11:20:10.747877  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:10.747900  341496 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 11:20:10.747969  341496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-627203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 11:20:14.847118  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:14.847156  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:14.847163  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:14.847172  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:14.847176  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:14.847181  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:14.847184  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:14.847187  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:14.847194  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:14.847208  326404 retry.go:31] will retry after 10.791921859s: missing components: kube-dns
	I0317 11:20:15.344266  341496 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-627203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.596232772s)
	I0317 11:20:15.344302  341496 kic.go:203] duration metric: took 4.596396796s to extract preloaded images to volume ...
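Note: the preload tarball is unpacked into the profile's docker volume before the node container starts, so containerd comes up with the v1.32.2 images already on disk. A hypothetical follow-up (not part of the test run) to confirm the volume exists and where it lives on the host:

	docker volume inspect default-k8s-diff-port-627203 --format '{{.Mountpoint}}'
	# expected: /var/lib/docker/volumes/default-k8s-diff-port-627203/_data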
	W0317 11:20:15.344459  341496 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 11:20:15.344607  341496 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 11:20:15.397506  341496 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-627203 --name default-k8s-diff-port-627203 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --network default-k8s-diff-port-627203 --ip 192.168.76.2 --volume default-k8s-diff-port-627203:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 11:20:15.665923  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Running}}
	I0317 11:20:15.686899  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:15.706866  341496 cli_runner.go:164] Run: docker exec default-k8s-diff-port-627203 stat /var/lib/dpkg/alternatives/iptables
	I0317 11:20:15.749402  341496 oci.go:144] the created container "default-k8s-diff-port-627203" has a running status.
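Note: the long `docker run` above publishes each port as `127.0.0.1::<port>`, i.e. bound to loopback with the host port left for docker to choose, which is why fresh ephemeral ports appear on every start. The port picked for the API server (8444/tcp here, per --apiserver-port=8444) can be resolved with the same template the log uses for 22/tcp below:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-627203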
	I0317 11:20:15.749447  341496 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa...
	I0317 11:20:15.892302  341496 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 11:20:15.918468  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:15.941520  341496 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 11:20:15.941545  341496 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-627203 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 11:20:15.989310  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:16.010066  341496 machine.go:93] provisionDockerMachine start ...
	I0317 11:20:16.010194  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:16.033285  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:16.033637  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:16.033665  341496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:20:16.034656  341496 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46524->127.0.0.1:33103: read: connection reset by peer
	I0317 11:20:19.170824  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-627203
	
	I0317 11:20:19.170859  341496 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-627203"
	I0317 11:20:19.170929  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.189150  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:19.189434  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:19.189452  341496 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-627203 && echo "default-k8s-diff-port-627203" | sudo tee /etc/hostname
	I0317 11:20:19.334316  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-627203
	
	I0317 11:20:19.334392  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.351482  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:19.351684  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:19.351701  341496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-627203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-627203/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-627203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:20:19.483211  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:20:19.483289  341496 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20535-4918/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-4918/.minikube}
	I0317 11:20:19.483331  341496 ubuntu.go:177] setting up certificates
	I0317 11:20:19.483341  341496 provision.go:84] configureAuth start
	I0317 11:20:19.483396  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:19.500645  341496 provision.go:143] copyHostCerts
	I0317 11:20:19.500703  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem, removing ...
	I0317 11:20:19.500713  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem
	I0317 11:20:19.500773  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem (1082 bytes)
	I0317 11:20:19.500859  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem, removing ...
	I0317 11:20:19.500868  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem
	I0317 11:20:19.500892  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem (1123 bytes)
	I0317 11:20:19.500946  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem, removing ...
	I0317 11:20:19.500954  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem
	I0317 11:20:19.500979  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem (1679 bytes)
	I0317 11:20:19.501029  341496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-627203 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-627203 localhost minikube]
	I0317 11:20:19.577076  341496 provision.go:177] copyRemoteCerts
	I0317 11:20:19.577143  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:20:19.577187  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.594134  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:19.688036  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:20:19.710326  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0317 11:20:19.732614  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 11:20:19.753945  341496 provision.go:87] duration metric: took 270.590449ms to configureAuth
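Note: configureAuth generates a server certificate whose SANs cover every name the machine may be reached by (127.0.0.1, the container IP 192.168.76.2, the profile name, localhost, minikube). Assuming openssl is available on the host, the SAN list can be checked from the generated server.pem:

	openssl x509 -in /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'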
	I0317 11:20:19.753968  341496 ubuntu.go:193] setting minikube options for container-runtime
	I0317 11:20:19.754118  341496 config.go:182] Loaded profile config "default-k8s-diff-port-627203": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:19.754128  341496 machine.go:96] duration metric: took 3.744035437s to provisionDockerMachine
	I0317 11:20:19.754134  341496 client.go:171] duration metric: took 9.612615756s to LocalClient.Create
	I0317 11:20:19.754154  341496 start.go:167] duration metric: took 9.612671271s to libmachine.API.Create "default-k8s-diff-port-627203"
	I0317 11:20:19.754161  341496 start.go:293] postStartSetup for "default-k8s-diff-port-627203" (driver="docker")
	I0317 11:20:19.754175  341496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:20:19.754215  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:20:19.754250  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.771203  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:19.872391  341496 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:20:19.875550  341496 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 11:20:19.875582  341496 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 11:20:19.875595  341496 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 11:20:19.875607  341496 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 11:20:19.875635  341496 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/addons for local assets ...
	I0317 11:20:19.875698  341496 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/files for local assets ...
	I0317 11:20:19.875804  341496 filesync.go:149] local asset: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem -> 116902.pem in /etc/ssl/certs
	I0317 11:20:19.875917  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:20:19.883445  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:20:19.905732  341496 start.go:296] duration metric: took 151.558516ms for postStartSetup
	I0317 11:20:19.906060  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:19.925755  341496 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json ...
	I0317 11:20:19.926020  341496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 11:20:19.926086  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.944770  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:15.751647  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:15.751680  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:15.751688  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:15.751699  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:15.751704  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:15.751711  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:15.751716  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:15.751722  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:15.751728  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:15.751750  317731 retry.go:31] will retry after 15.481083164s: missing components: kube-dns
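Note: three `minikube start` invocations are running in parallel on this agent, so their log lines interleave; the number after the timestamp is the process id (341496 for default-k8s-diff-port-627203, 326404 for no-preload-189670, 317731 for old-k8s-version-702762). To follow a single profile through a saved copy of this log, filter on that id (logs.txt hypothetical):

	grep ' 341496 ' logs.txt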
	I0317 11:20:20.036185  341496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 11:20:20.040344  341496 start.go:128] duration metric: took 9.901332366s to createHost
	I0317 11:20:20.040365  341496 start.go:83] releasing machines lock for "default-k8s-diff-port-627203", held for 9.901455126s
	I0317 11:20:20.040424  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:20.057945  341496 ssh_runner.go:195] Run: cat /version.json
	I0317 11:20:20.057987  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:20.058044  341496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 11:20:20.058110  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:20.077893  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:20.078299  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:20.248043  341496 ssh_runner.go:195] Run: systemctl --version
	I0317 11:20:20.252422  341496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:20:20.256698  341496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 11:20:20.280151  341496 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 11:20:20.280205  341496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:20:20.303739  341496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0317 11:20:20.303757  341496 start.go:495] detecting cgroup driver to use...
	I0317 11:20:20.303795  341496 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 11:20:20.303871  341496 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:20:20.314490  341496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:20:20.323921  341496 docker.go:217] disabling cri-docker service (if available) ...
	I0317 11:20:20.323964  341496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 11:20:20.336961  341496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 11:20:20.348981  341496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 11:20:20.427755  341496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 11:20:20.507541  341496 docker.go:233] disabling docker service ...
	I0317 11:20:20.507615  341496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 11:20:20.525433  341496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 11:20:20.536350  341496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 11:20:20.601585  341496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 11:20:20.666739  341496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 11:20:20.677294  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:20:20.692169  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:20:20.700729  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:20:20.709826  341496 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:20:20.709888  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:20:20.718738  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:20:20.727842  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:20:20.736960  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:20:20.745738  341496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:20:20.753974  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:20:20.762628  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:20:20.770887  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 11:20:20.779873  341496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:20:20.787306  341496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:20:20.794585  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:20.857244  341496 ssh_runner.go:195] Run: sudo systemctl restart containerd
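The run above reconfigures containerd entirely through in-place sed edits of /etc/containerd/config.toml (sandbox image, runc v2 runtime, CNI conf dir, cgroup driver) and then restarts the service so the changes take effect. A minimal Go sketch of one such edit, the SystemdCgroup flip that selects the "cgroupfs" driver; setSystemdCgroup is a hypothetical helper:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setSystemdCgroup rewrites every `SystemdCgroup = ...` line while
    // preserving indentation, the same effect as the sed command
    // `s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g` above.
    func setSystemdCgroup(configTOML string, enabled bool) string {
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	return re.ReplaceAllString(configTOML, fmt.Sprintf("${1}SystemdCgroup = %t", enabled))
    }

    func main() {
    	in := "            SystemdCgroup = true\n"
    	fmt.Print(setSystemdCgroup(in, false))
    }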
	I0317 11:20:20.962615  341496 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 11:20:20.962696  341496 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 11:20:20.966342  341496 start.go:563] Will wait 60s for crictl version
	I0317 11:20:20.966394  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:20:20.969458  341496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:20:21.000301  341496 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 11:20:21.000364  341496 ssh_runner.go:195] Run: containerd --version
	I0317 11:20:21.021585  341496 ssh_runner.go:195] Run: containerd --version
	I0317 11:20:21.045298  341496 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 11:20:21.046823  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:20:21.063998  341496 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0317 11:20:21.067681  341496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
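The one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal record, append the fresh one, and copy the result back over /etc/hosts. Sketched in Go with an illustrative upsertHostRecord helper:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostRecord drops lines ending in "\t<name>" and appends a fresh
    // "<ip>\t<name>" record, like the grep -v / echo / cp pipeline above.
    func upsertHostRecord(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	in := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
    	fmt.Print(upsertHostRecord(in, "192.168.76.1", "host.minikube.internal"))
    }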
	I0317 11:20:21.078036  341496 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:20:21.078155  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:21.078215  341496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:20:21.110394  341496 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:20:21.110416  341496 containerd.go:534] Images already preloaded, skipping extraction
	I0317 11:20:21.110471  341496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:20:21.147039  341496 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:20:21.147059  341496 cache_images.go:84] Images are preloaded, skipping loading
	I0317 11:20:21.147072  341496 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.32.2 containerd true true} ...
	I0317 11:20:21.147182  341496 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-627203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 11:20:21.147245  341496 ssh_runner.go:195] Run: sudo crictl info
	I0317 11:20:21.180368  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:21.180402  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:21.180417  341496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:20:21.180451  341496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-627203 NodeName:default-k8s-diff-port-627203 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 11:20:21.180598  341496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-627203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
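The three stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration) are written out as /var/tmp/minikube/kubeadm.yaml and later handed to `kubeadm init --config`. A minimal sketch, assuming gopkg.in/yaml.v3 is available, that splits the stream and reads back the kubelet's cgroupDriver:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm configs are plain multi-document YAML separated by ---
    	for _, doc := range strings.Split(string(raw), "\n---\n") {
    		var m map[string]interface{}
    		if yaml.Unmarshal([]byte(doc), &m) != nil {
    			continue
    		}
    		if m["kind"] == "KubeletConfiguration" {
    			fmt.Println("cgroupDriver:", m["cgroupDriver"]) // "cgroupfs" here
    		}
    	}
    }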
	I0317 11:20:21.180676  341496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:20:21.189167  341496 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 11:20:21.189222  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 11:20:21.197091  341496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0317 11:20:21.212836  341496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:20:21.228613  341496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2318 bytes)
	I0317 11:20:21.244235  341496 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0317 11:20:21.247449  341496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:20:21.257029  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:21.331412  341496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:20:21.344658  341496 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203 for IP: 192.168.76.2
	I0317 11:20:21.344685  341496 certs.go:194] generating shared ca certs ...
	I0317 11:20:21.344706  341496 certs.go:226] acquiring lock for ca certs: {Name:mkf58624c63680e02907d28348d45986283847c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.344852  341496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key
	I0317 11:20:21.344888  341496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key
	I0317 11:20:21.344900  341496 certs.go:256] generating profile certs ...
	I0317 11:20:21.344967  341496 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key
	I0317 11:20:21.344994  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt with IP's: []
	I0317 11:20:21.433063  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt ...
	I0317 11:20:21.433090  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt: {Name:mk081d27f47a46e83ef42cd529ab90efa4a42374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.433242  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key ...
	I0317 11:20:21.433256  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key: {Name:mk3ff3f97f5b6d17c55106167353f358e3be7b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.433330  341496 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2
	I0317 11:20:21.433345  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0317 11:20:21.695664  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 ...
	I0317 11:20:21.695695  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2: {Name:mk7442ef755923abf17c70bd38ce4a38e38e6b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.695884  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2 ...
	I0317 11:20:21.695904  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2: {Name:mke8376d0935665b80188d48fe43b8e5b8ff6f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.695977  341496 certs.go:381] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt
	I0317 11:20:21.696069  341496 certs.go:385] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key
	I0317 11:20:21.696166  341496 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key
	I0317 11:20:21.696189  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt with IP's: []
	I0317 11:20:21.791034  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt ...
	I0317 11:20:21.791067  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt: {Name:mk96f99fc08821936606db2cdde9f87f27d42fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.791243  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key ...
	I0317 11:20:21.791284  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key: {Name:mk0e9ec0c366cd0af025f90a833ba1e60d673556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.791492  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem (1338 bytes)
	W0317 11:20:21.791525  341496 certs.go:480] ignoring /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690_empty.pem, impossibly tiny 0 bytes
	I0317 11:20:21.791536  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 11:20:21.791559  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem (1082 bytes)
	I0317 11:20:21.791585  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem (1123 bytes)
	I0317 11:20:21.791609  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem (1679 bytes)
	I0317 11:20:21.791644  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:20:21.792251  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:20:21.814842  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:20:21.836814  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:20:21.860128  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 11:20:21.881562  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0317 11:20:21.903421  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 11:20:21.928625  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:20:21.951436  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 11:20:21.974719  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem --> /usr/share/ca-certificates/11690.pem (1338 bytes)
	I0317 11:20:21.998103  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /usr/share/ca-certificates/116902.pem (1708 bytes)
	I0317 11:20:22.019954  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:20:22.042505  341496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:20:22.058914  341496 ssh_runner.go:195] Run: openssl version
	I0317 11:20:22.064354  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116902.pem && ln -fs /usr/share/ca-certificates/116902.pem /etc/ssl/certs/116902.pem"
	I0317 11:20:22.073425  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.076909  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:32 /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.076964  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.084480  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116902.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:20:22.094200  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:20:22.103020  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.106304  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.106414  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.112757  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:20:22.121663  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11690.pem && ln -fs /usr/share/ca-certificates/11690.pem /etc/ssl/certs/11690.pem"
	I0317 11:20:22.130150  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.133632  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:32 /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.133685  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.140348  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11690.pem /etc/ssl/certs/51391683.0"
	I0317 11:20:22.148875  341496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:20:22.151896  341496 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
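That failed stat is how minikube decides this is a fresh cluster: with apiserver-kubelet-client.crt absent there is nothing to adopt, so it proceeds straight to kubeadm init. A plain-Go equivalent of the probe:

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    func main() {
    	// A missing client cert is treated as "likely first start".
    	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if errors.Is(err, fs.ErrNotExist) {
    		fmt.Println("cert doesn't exist, likely first start")
    	}
    }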
	I0317 11:20:22.151951  341496 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:20:22.152020  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 11:20:22.152054  341496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 11:20:22.184980  341496 cri.go:89] found id: ""
	I0317 11:20:22.185043  341496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:20:22.193505  341496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:20:22.201849  341496 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 11:20:22.201930  341496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:20:22.210091  341496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:20:22.210113  341496 kubeadm.go:157] found existing configuration files:
	
	I0317 11:20:22.210163  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0317 11:20:22.218192  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:20:22.218255  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:20:22.226657  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0317 11:20:22.239638  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:20:22.239694  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:20:22.247616  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0317 11:20:22.256388  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:20:22.256448  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:20:22.264706  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0317 11:20:22.272518  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:20:22.272585  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
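The block above is a stale-config sweep: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the check fails (here simply because none of the files exist yet), so kubeadm regenerates all four. A compact Go sketch with an illustrative sweepStaleConfigs helper:

    package main

    import (
    	"bytes"
    	"os"
    )

    // sweepStaleConfigs removes any kubeconfig that does not reference the
    // expected endpoint; unreadable files count as stale, as in the log above.
    func sweepStaleConfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			os.Remove(p) // mirrors `sudo rm -f` in the log
    		}
    	}
    }

    func main() {
    	sweepStaleConfigs("https://control-plane.minikube.internal:8444", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }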
	I0317 11:20:22.281056  341496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 11:20:22.333597  341496 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 11:20:22.333966  341496 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 11:20:22.389918  341496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 11:20:25.643642  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:25.643677  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:25.643687  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:25.643701  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:25.643706  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:25.643713  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:25.643718  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:25.643723  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:25.643727  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:25.643744  326404 retry.go:31] will retry after 15.233092286s: missing components: kube-dns
	I0317 11:20:31.555534  341496 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 11:20:31.555624  341496 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:20:31.555753  341496 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 11:20:31.555806  341496 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 11:20:31.555879  341496 kubeadm.go:310] OS: Linux
	I0317 11:20:31.555963  341496 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 11:20:31.556040  341496 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 11:20:31.556116  341496 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 11:20:31.556186  341496 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 11:20:31.556263  341496 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 11:20:31.556356  341496 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 11:20:31.556406  341496 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 11:20:31.556449  341496 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 11:20:31.556490  341496 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 11:20:31.556550  341496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:20:31.556678  341496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:20:31.556827  341496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:20:31.556924  341496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:20:31.558772  341496 out.go:235]   - Generating certificates and keys ...
	I0317 11:20:31.558886  341496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:20:31.558955  341496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:20:31.559017  341496 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:20:31.559068  341496 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:20:31.559146  341496 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:20:31.559215  341496 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:20:31.559342  341496 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:20:31.559507  341496 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-627203 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 11:20:31.559566  341496 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:20:31.559687  341496 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-627203 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 11:20:31.559743  341496 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:20:31.559836  341496 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:20:31.559913  341496 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:20:31.560004  341496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:20:31.560089  341496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:20:31.560182  341496 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 11:20:31.560271  341496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:20:31.560363  341496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:20:31.560437  341496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:20:31.560547  341496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:20:31.560619  341496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:20:31.561976  341496 out.go:235]   - Booting up control plane ...
	I0317 11:20:31.562075  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:20:31.562146  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:20:31.562203  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:20:31.562291  341496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:20:31.562370  341496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:20:31.562404  341496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:20:31.562526  341496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 11:20:31.562631  341496 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 11:20:31.562686  341496 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.585498ms
	I0317 11:20:31.562756  341496 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 11:20:31.562810  341496 kubeadm.go:310] [api-check] The API server is healthy after 5.001640951s
	I0317 11:20:31.562926  341496 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:20:31.563043  341496 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:20:31.563096  341496 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:20:31.563308  341496 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-627203 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 11:20:31.563370  341496 kubeadm.go:310] [bootstrap-token] Using token: cynw4v.vidupn9uwbpkry9q
	I0317 11:20:31.565344  341496 out.go:235]   - Configuring RBAC rules ...
	I0317 11:20:31.565438  341496 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:20:31.565516  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:20:31.565649  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:20:31.565854  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:20:31.565999  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:20:31.566087  341496 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:20:31.566197  341496 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:20:31.566250  341496 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:20:31.566293  341496 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:20:31.566298  341496 kubeadm.go:310] 
	I0317 11:20:31.566370  341496 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:20:31.566379  341496 kubeadm.go:310] 
	I0317 11:20:31.566477  341496 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:20:31.566484  341496 kubeadm.go:310] 
	I0317 11:20:31.566505  341496 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:20:31.566555  341496 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:20:31.566599  341496 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:20:31.566605  341496 kubeadm.go:310] 
	I0317 11:20:31.566649  341496 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:20:31.566655  341496 kubeadm.go:310] 
	I0317 11:20:31.566724  341496 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:20:31.566735  341496 kubeadm.go:310] 
	I0317 11:20:31.566814  341496 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:20:31.566915  341496 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:20:31.567023  341496 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:20:31.567040  341496 kubeadm.go:310] 
	I0317 11:20:31.567157  341496 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:20:31.567285  341496 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:20:31.567299  341496 kubeadm.go:310] 
	I0317 11:20:31.567400  341496 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cynw4v.vidupn9uwbpkry9q \
	I0317 11:20:31.567505  341496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 \
	I0317 11:20:31.567540  341496 kubeadm.go:310] 	--control-plane 
	I0317 11:20:31.567550  341496 kubeadm.go:310] 
	I0317 11:20:31.567675  341496 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:20:31.567685  341496 kubeadm.go:310] 
	I0317 11:20:31.567820  341496 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cynw4v.vidupn9uwbpkry9q \
	I0317 11:20:31.567990  341496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 
	I0317 11:20:31.568005  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:31.568014  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:31.570308  341496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 11:20:31.571654  341496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 11:20:31.575330  341496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 11:20:31.575346  341496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 11:20:31.592203  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 11:20:31.796107  341496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:20:31.796185  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:31.796227  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-627203 minikube.k8s.io/updated_at=2025_03_17T11_20_31_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=default-k8s-diff-port-627203 minikube.k8s.io/primary=true
	I0317 11:20:31.913761  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:31.913762  341496 ops.go:34] apiserver oom_adj: -16
	I0317 11:20:32.414495  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:32.914861  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:33.414784  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:33.914144  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:34.414705  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:34.913915  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:35.414122  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:35.487512  341496 kubeadm.go:1113] duration metric: took 3.691382531s to wait for elevateKubeSystemPrivileges
	I0317 11:20:35.487556  341496 kubeadm.go:394] duration metric: took 13.335608972s to StartCluster
	I0317 11:20:35.487576  341496 settings.go:142] acquiring lock: {Name:mk2a57d556efff40ccd4336229d7a78216b861f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:35.487640  341496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:20:35.489566  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/kubeconfig: {Name:mk686b9f6159ab958672b945ae0aa5a9c96e9ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:35.489774  341496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:20:35.489881  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:20:35.489943  341496 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:20:35.490029  341496 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-627203"
	I0317 11:20:35.490056  341496 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-627203"
	I0317 11:20:35.490076  341496 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-627203"
	I0317 11:20:35.490078  341496 config.go:182] Loaded profile config "default-k8s-diff-port-627203": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:35.490098  341496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-627203"
	I0317 11:20:35.490113  341496 host.go:66] Checking if "default-k8s-diff-port-627203" exists ...
	I0317 11:20:35.490455  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.490636  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.491384  341496 out.go:177] * Verifying Kubernetes components...
	I0317 11:20:35.492644  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:35.518758  341496 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-627203"
	I0317 11:20:35.518803  341496 host.go:66] Checking if "default-k8s-diff-port-627203" exists ...
	I0317 11:20:35.519164  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.520182  341496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:20:31.236896  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:31.236935  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:31.236944  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:31.236959  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:31.236964  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:31.236971  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:31.236976  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:31.236984  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:31.236990  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:31.237009  317731 retry.go:31] will retry after 19.261545466s: missing components: kube-dns
	I0317 11:20:35.521412  341496 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:20:35.521431  341496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:20:35.521480  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:35.546610  341496 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:20:35.546635  341496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:20:35.546679  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:35.549777  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:35.572702  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:35.624663  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
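The pipeline above edits the coredns ConfigMap in flight: sed inserts a hosts stanza ahead of the existing `forward . /etc/resolv.conf` directive and a log directive ahead of errors, then kubectl replace writes the result back. Reconstructed from the command itself (surrounding directives elided), the injected Corefile fragment is:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }

With that stanza in place, in-cluster lookups of host.minikube.internal resolve to the host-reachable gateway address 192.168.76.1, matching the "host record injected" line a few entries below.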
	I0317 11:20:35.637144  341496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:20:35.724225  341496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:20:35.825754  341496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:20:36.141080  341496 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0317 11:20:36.142459  341496 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-627203" to be "Ready" ...
	I0317 11:20:36.207177  341496 node_ready.go:49] node "default-k8s-diff-port-627203" has status "Ready":"True"
	I0317 11:20:36.207215  341496 node_ready.go:38] duration metric: took 64.732247ms for node "default-k8s-diff-port-627203" to be "Ready" ...
	I0317 11:20:36.207231  341496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:20:36.211865  341496 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace to be "Ready" ...
	I0317 11:20:36.619880  341496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:20:36.621467  341496 addons.go:514] duration metric: took 1.131519409s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 11:20:36.646479  341496 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-627203" context rescaled to 1 replicas
	I0317 11:20:38.217170  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:40.881134  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:40.881166  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:40.881172  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:40.881180  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:40.881183  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:40.881187  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:40.881190  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:40.881194  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:40.881197  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:40.881210  326404 retry.go:31] will retry after 23.951072137s: missing components: kube-dns
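
In the no-preload-189670 cluster the two Pending pods are coredns and kindnet, and kindnet is the CNI: until its kindnet-cni container runs, the node has no pod network and coredns cannot be started, which is exactly the missing-kube-dns state the retry loop keeps hitting. The usual next steps, sketched as manual commands (not part of the run; daemonset and label names assume the standard kindnet manifest):

    # Why does the CNI pod never start? Events and logs usually say.
    kubectl -n kube-system describe pod -l app=kindnet
    kubectl -n kube-system logs daemonset/kindnet -c kindnet-cni
    sudo crictl ps -a --name=kindnet    # on the node: container attempts so far
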
	I0317 11:20:40.524557  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:20:40.524600  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:20:40.524614  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:20:40.524624  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:40.524632  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:20:40.524640  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:20:40.524649  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:20:40.524658  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:20:40.524664  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:20:40.524673  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:20:40.524693  271403 retry.go:31] will retry after 1m5.301611864s: missing components: kube-dns
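
The calico-236437 cluster is blocked one step earlier: calico-node-ks7vr has not even run its init containers (upgrade-ipam, install-cni, mount-bpffs), so no CNI config gets installed and coredns stays Pending. Inspecting the init containers in order is the standard move (a manual follow-up; pod and container names are taken from the log lines above):

    # Check events first, then each init container's log in sequence.
    kubectl -n kube-system describe pod calico-node-ks7vr
    kubectl -n kube-system logs calico-node-ks7vr -c upgrade-ipam
    kubectl -n kube-system logs calico-node-ks7vr -c install-cni
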
	I0317 11:20:40.217729  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:42.716852  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:44.717026  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:46.717095  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:49.217150  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:50.502591  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:50.502629  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:50.502636  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:50.502647  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:50.502652  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:50.502658  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:50.502664  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:50.502670  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:50.502676  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:50.502696  317731 retry.go:31] will retry after 27.654906766s: missing components: kube-dns
	I0317 11:20:51.716947  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:54.217035  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:56.217405  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:58.716755  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:00.717212  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:03.216840  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:04.837935  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:04.837975  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:04.837986  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:21:04.837998  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:04.838004  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:21:04.838010  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:21:04.838016  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:21:04.838020  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:21:04.838025  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:21:04.838044  326404 retry.go:31] will retry after 29.604408571s: missing components: kube-dns
	I0317 11:21:05.716737  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:07.717290  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:10.216367  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:12.217359  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:14.717254  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:17.216553  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:19.216868  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:18.162882  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:18.162924  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:18.162931  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:21:18.162943  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:18.162950  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:21:18.162957  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:21:18.162963  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:21:18.162969  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:21:18.162978  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:21:18.162995  317731 retry.go:31] will retry after 25.805377541s: missing components: kube-dns
	I0317 11:21:21.717204  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:23.717446  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:26.217593  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:28.716779  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:30.716838  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:32.717482  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:34.717607  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:34.446564  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:34.446602  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:34.446609  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:21:34.446620  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:34.446625  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:21:34.446633  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:21:34.446637  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:21:34.446644  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:21:34.446649  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:21:34.446672  326404 retry.go:31] will retry after 39.340349632s: missing components: kube-dns
	I0317 11:21:37.217012  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:39.720107  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:42.217009  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:44.717014  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:43.975001  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:43.975039  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:43.975046  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:21:43.975057  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:43.975063  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:21:43.975070  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:21:43.975075  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:21:43.975082  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:21:43.975087  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:21:43.975105  317731 retry.go:31] will retry after 50.299309092s: missing components: kube-dns
	I0317 11:21:45.830506  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:21:45.830550  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:21:45.830565  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:21:45.830575  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:45.830582  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:21:45.830589  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:21:45.830596  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:21:45.830602  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:21:45.830612  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:21:45.830619  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:21:45.830639  271403 retry.go:31] will retry after 1m6.469274108s: missing components: kube-dns
	I0317 11:21:47.216852  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:49.716980  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:51.717159  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:53.717199  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:56.216966  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:58.716666  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:00.716842  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:03.216854  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:05.716421  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:07.717473  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:09.717607  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:12.216801  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:14.217528  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:13.791135  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:13.791174  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:13.791182  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:22:13.791189  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:13.791193  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:22:13.791198  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:22:13.791201  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:22:13.791204  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:22:13.791207  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:22:13.791221  326404 retry.go:31] will retry after 37.076286109s: missing components: kube-dns
	I0317 11:22:16.716908  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:18.717190  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:21.216745  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:23.717172  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:25.717597  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:28.216363  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:30.216624  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:32.216877  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:34.716824  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:34.281779  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:34.281815  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:34.281822  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:22:34.281830  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:34.281834  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:22:34.281840  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:22:34.281844  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:22:34.281848  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:22:34.281851  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:22:34.281866  317731 retry.go:31] will retry after 1m2.657088736s: missing components: kube-dns
	I0317 11:22:37.217665  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:39.716973  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:41.717247  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:44.216939  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:46.716529  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:48.716994  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:50.872276  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:50.872306  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:50.872312  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:22:50.872319  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:50.872323  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:22:50.872329  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:22:50.872332  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:22:50.872336  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:22:50.872339  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:22:50.872352  326404 retry.go:31] will retry after 59.664508979s: missing components: kube-dns
	I0317 11:22:52.304439  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:22:52.304483  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:22:52.304503  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:22:52.304514  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:52.304522  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:22:52.304529  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:22:52.304538  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:22:52.304546  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:22:52.304553  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:22:52.304559  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:22:52.304577  271403 retry.go:31] will retry after 57.75468648s: missing components: kube-dns
	I0317 11:22:51.216816  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:53.216970  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:55.716609  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:57.717480  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:00.217407  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:02.716365  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:04.716438  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:06.716987  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:09.216843  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:11.217200  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:13.218595  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:15.717196  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:18.216354  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:20.217860  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:22.716518  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:24.717213  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:27.216933  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:29.717016  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:32.216483  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:34.217018  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:36.716769  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:38.717020  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:36.943443  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:23:36.943481  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:36.943487  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:23:36.943497  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:23:36.943503  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:23:36.943509  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:23:36.943512  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:23:36.943516  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:23:36.943520  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:23:36.943538  317731 retry.go:31] will retry after 53.125754107s: missing components: kube-dns
	I0317 11:23:40.717051  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:42.717588  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:45.216717  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:47.717009  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:49.718582  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:50.542962  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:23:50.543000  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:50.543007  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:23:50.543017  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:23:50.543021  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:23:50.543027  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:23:50.543030  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:23:50.543034  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:23:50.543037  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:23:50.543062  326404 retry.go:31] will retry after 54.915772165s: missing components: kube-dns
	I0317 11:23:50.063088  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:23:50.063127  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:23:50.063136  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:23:50.063153  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:50.063159  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:23:50.063166  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:23:50.063169  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:23:50.063174  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:23:50.063177  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:23:50.063180  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:23:50.063197  271403 retry.go:31] will retry after 47.200040689s: missing components: kube-dns
	I0317 11:23:52.216980  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:54.217886  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:56.717131  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:59.217483  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:01.717240  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:04.216952  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:06.217363  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:08.717047  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:11.216816  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:13.217215  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:15.217429  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:17.717023  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:20.216953  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:22.216989  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:24.716953  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:27.217304  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:29.717972  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:30.074980  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:30.075015  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:30.075021  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:24:30.075028  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:30.075032  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:24:30.075036  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:24:30.075040  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:24:30.075046  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:24:30.075049  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:24:30.077099  317731 out.go:201] 
	W0317 11:24:30.078365  317731 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:24:30.078387  317731 out.go:270] * 
	W0317 11:24:30.079214  317731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:24:30.080684  317731 out.go:201] 
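
The old-k8s-version-702762 run gives up here with the same GUEST_START failure as the other profiles. As the box suggests, the full log bundle can be captured from the failed profile before it is deleted:

    # Collect the complete log bundle for the failed profile.
    out/minikube-linux-amd64 logs -p old-k8s-version-702762 --file=logs.txt
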
	I0317 11:24:32.217812  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:34.716771  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:37.266163  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:24:37.266199  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:24:37.266210  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:24:37.266217  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:37.266225  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:24:37.266231  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:24:37.266236  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:24:37.266245  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:24:37.266251  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:24:37.266261  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:24:37.266275  271403 retry.go:31] will retry after 51.703965946s: missing components: kube-dns
	I0317 11:24:36.216864  341496 pod_ready.go:82] duration metric: took 4m0.004958001s for pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace to be "Ready" ...
	E0317 11:24:36.216891  341496 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:24:36.216901  341496 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.218595  341496 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zwq6r" not found
	I0317 11:24:36.218617  341496 pod_ready.go:82] duration metric: took 1.707352ms for pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace to be "Ready" ...
	E0317 11:24:36.218628  341496 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zwq6r" not found
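
The "not found" for coredns-668d6bf9bc-zwq6r is expected rather than a new failure: the deployment was rescaled to one replica at 11:20:36 (the kapi.go line above), so the second replica's pod was deleted, and pod_ready skips pods that no longer exist. The replica sets confirm this (a manual check, not part of the run):

    # One ReplicaSet with DESIRED=1 means the second coredns pod was
    # intentionally removed by the rescale, not lost.
    kubectl -n kube-system get rs -l k8s-app=kube-dns -o wide
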
	I0317 11:24:36.218636  341496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.222286  341496 pod_ready.go:93] pod "etcd-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.222302  341496 pod_ready.go:82] duration metric: took 3.659438ms for pod "etcd-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.222314  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.225705  341496 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.225722  341496 pod_ready.go:82] duration metric: took 3.400096ms for pod "kube-apiserver-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.225735  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.228777  341496 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.228794  341496 pod_ready.go:82] duration metric: took 3.051925ms for pod "kube-controller-manager-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.228805  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lxqgz" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.415375  341496 pod_ready.go:93] pod "kube-proxy-lxqgz" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.415396  341496 pod_ready.go:82] duration metric: took 186.584372ms for pod "kube-proxy-lxqgz" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.415406  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.814949  341496 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.814974  341496 pod_ready.go:82] duration metric: took 399.56185ms for pod "kube-scheduler-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.814983  341496 pod_ready.go:39] duration metric: took 4m0.60773487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:24:36.815000  341496 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:24:36.815049  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:24:36.815111  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:24:36.850770  341496 cri.go:89] found id: "ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:36.850801  341496 cri.go:89] found id: ""
	I0317 11:24:36.850811  341496 logs.go:282] 1 containers: [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610]
	I0317 11:24:36.850864  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.854204  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:24:36.854262  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:24:36.887650  341496 cri.go:89] found id: "bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:36.887674  341496 cri.go:89] found id: ""
	I0317 11:24:36.887682  341496 logs.go:282] 1 containers: [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe]
	I0317 11:24:36.887732  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.891072  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:24:36.891141  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:24:36.924018  341496 cri.go:89] found id: ""
	I0317 11:24:36.924041  341496 logs.go:282] 0 containers: []
	W0317 11:24:36.924052  341496 logs.go:284] No container was found matching "coredns"
	I0317 11:24:36.924059  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:24:36.924132  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:24:36.957551  341496 cri.go:89] found id: "17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:36.957576  341496 cri.go:89] found id: ""
	I0317 11:24:36.957585  341496 logs.go:282] 1 containers: [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9]
	I0317 11:24:36.957640  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.961125  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:24:36.961193  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:24:36.995097  341496 cri.go:89] found id: "a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:36.995124  341496 cri.go:89] found id: ""
	I0317 11:24:36.995135  341496 logs.go:282] 1 containers: [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba]
	I0317 11:24:36.995183  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.998558  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:24:36.998615  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:24:37.030703  341496 cri.go:89] found id: "e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:37.030731  341496 cri.go:89] found id: ""
	I0317 11:24:37.030741  341496 logs.go:282] 1 containers: [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0]
	I0317 11:24:37.030824  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:37.034348  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:24:37.034410  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:24:37.066880  341496 cri.go:89] found id: ""
	I0317 11:24:37.066922  341496 logs.go:282] 0 containers: []
	W0317 11:24:37.066933  341496 logs.go:284] No container was found matching "kindnet"
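
Note what this gathering pass just established for default-k8s-diff-port-627203: after four minutes of Pending, crictl finds zero coredns containers, meaning the pod never got as far as creating a container at all, which points at sandbox/CNI setup rather than a crashing coredns. Two hedged checks that would narrow it down (pod name from this run):

    # Pod events usually name the blocking condition verbatim.
    kubectl -n kube-system describe pod coredns-668d6bf9bc-tm7kk | sed -n '/Events:/,$p'
    # On the node: has a sandbox for the pod ever been created?
    sudo crictl pods --name coredns
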
	I0317 11:24:37.066950  341496 logs.go:123] Gathering logs for etcd [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe] ...
	I0317 11:24:37.066964  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:37.104789  341496 logs.go:123] Gathering logs for kube-scheduler [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9] ...
	I0317 11:24:37.104816  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:37.143054  341496 logs.go:123] Gathering logs for kube-proxy [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba] ...
	I0317 11:24:37.143083  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:37.177575  341496 logs.go:123] Gathering logs for kube-controller-manager [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0] ...
	I0317 11:24:37.177610  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:37.223926  341496 logs.go:123] Gathering logs for containerd ...
	I0317 11:24:37.223956  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:24:37.272572  341496 logs.go:123] Gathering logs for kubelet ...
	I0317 11:24:37.272600  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:24:37.363184  341496 logs.go:123] Gathering logs for dmesg ...
	I0317 11:24:37.363214  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:24:37.384660  341496 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:24:37.384687  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:24:37.470494  341496 logs.go:123] Gathering logs for kube-apiserver [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610] ...
	I0317 11:24:37.470522  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:37.510318  341496 logs.go:123] Gathering logs for container status ...
	I0317 11:24:37.510345  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
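
All of the gathering steps above are plain shell commands and can be rerun by hand on the node with the same scopes the test used:

    sudo journalctl -u kubelet -n 400         # kubelet
    sudo journalctl -u containerd -n 400      # container runtime
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a    # container status, docker fallback
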
	I0317 11:24:40.047399  341496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:24:40.058642  341496 api_server.go:72] duration metric: took 4m4.568840658s to wait for apiserver process to appear ...
	I0317 11:24:40.058671  341496 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:24:40.058702  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:24:40.058747  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:24:40.094399  341496 cri.go:89] found id: "ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:40.094426  341496 cri.go:89] found id: ""
	I0317 11:24:40.094436  341496 logs.go:282] 1 containers: [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610]
	I0317 11:24:40.094492  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.098090  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:24:40.098151  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:24:40.130616  341496 cri.go:89] found id: "bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:40.130634  341496 cri.go:89] found id: ""
	I0317 11:24:40.130641  341496 logs.go:282] 1 containers: [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe]
	I0317 11:24:40.130686  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.133963  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:24:40.134022  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:24:40.166714  341496 cri.go:89] found id: ""
	I0317 11:24:40.166737  341496 logs.go:282] 0 containers: []
	W0317 11:24:40.166749  341496 logs.go:284] No container was found matching "coredns"
	I0317 11:24:40.166757  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:24:40.166814  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:24:40.200402  341496 cri.go:89] found id: "17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:40.200428  341496 cri.go:89] found id: ""
	I0317 11:24:40.200438  341496 logs.go:282] 1 containers: [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9]
	I0317 11:24:40.200498  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.203808  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:24:40.203882  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:24:40.237218  341496 cri.go:89] found id: "a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:40.237243  341496 cri.go:89] found id: ""
	I0317 11:24:40.237254  341496 logs.go:282] 1 containers: [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba]
	I0317 11:24:40.237312  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.240687  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:24:40.240741  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:24:40.273296  341496 cri.go:89] found id: "e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:40.273317  341496 cri.go:89] found id: ""
	I0317 11:24:40.273326  341496 logs.go:282] 1 containers: [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0]
	I0317 11:24:40.273393  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.277173  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:24:40.277247  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:24:40.308698  341496 cri.go:89] found id: ""
	I0317 11:24:40.308720  341496 logs.go:282] 0 containers: []
	W0317 11:24:40.308728  341496 logs.go:284] No container was found matching "kindnet"
	I0317 11:24:40.308740  341496 logs.go:123] Gathering logs for kube-scheduler [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9] ...
	I0317 11:24:40.308752  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:40.348491  341496 logs.go:123] Gathering logs for kube-proxy [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba] ...
	I0317 11:24:40.348522  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:40.381699  341496 logs.go:123] Gathering logs for kube-controller-manager [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0] ...
	I0317 11:24:40.381727  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:40.428909  341496 logs.go:123] Gathering logs for container status ...
	I0317 11:24:40.428937  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:24:40.464242  341496 logs.go:123] Gathering logs for etcd [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe] ...
	I0317 11:24:40.464268  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:40.502434  341496 logs.go:123] Gathering logs for containerd ...
	I0317 11:24:40.502464  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:24:40.549009  341496 logs.go:123] Gathering logs for kubelet ...
	I0317 11:24:40.549038  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:24:40.645736  341496 logs.go:123] Gathering logs for dmesg ...
	I0317 11:24:40.645768  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:24:40.667061  341496 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:24:40.667089  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:24:40.747006  341496 logs.go:123] Gathering logs for kube-apiserver [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610] ...
	I0317 11:24:40.747040  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:43.287988  341496 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0317 11:24:43.291755  341496 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0317 11:24:43.292626  341496 api_server.go:141] control plane version: v1.32.2
	I0317 11:24:43.292649  341496 api_server.go:131] duration metric: took 3.233971345s to wait for apiserver health ...
	I0317 11:24:43.292656  341496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:24:43.292676  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:24:43.292724  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:24:43.325112  341496 cri.go:89] found id: "ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:43.325137  341496 cri.go:89] found id: ""
	I0317 11:24:43.325146  341496 logs.go:282] 1 containers: [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610]
	I0317 11:24:43.325211  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.328726  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:24:43.328771  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:24:43.362728  341496 cri.go:89] found id: "bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:43.362753  341496 cri.go:89] found id: ""
	I0317 11:24:43.362763  341496 logs.go:282] 1 containers: [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe]
	I0317 11:24:43.362819  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.367669  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:24:43.367741  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:24:43.402187  341496 cri.go:89] found id: ""
	I0317 11:24:43.402216  341496 logs.go:282] 0 containers: []
	W0317 11:24:43.402227  341496 logs.go:284] No container was found matching "coredns"
	I0317 11:24:43.402234  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:24:43.402283  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:24:43.435445  341496 cri.go:89] found id: "17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:43.435466  341496 cri.go:89] found id: ""
	I0317 11:24:43.435474  341496 logs.go:282] 1 containers: [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9]
	I0317 11:24:43.435534  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.438732  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:24:43.438789  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:24:43.473195  341496 cri.go:89] found id: "a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:43.473225  341496 cri.go:89] found id: ""
	I0317 11:24:43.473236  341496 logs.go:282] 1 containers: [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba]
	I0317 11:24:43.473296  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.476550  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:24:43.476626  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:24:43.508805  341496 cri.go:89] found id: "e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:43.508825  341496 cri.go:89] found id: ""
	I0317 11:24:43.508833  341496 logs.go:282] 1 containers: [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0]
	I0317 11:24:43.508880  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.512124  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:24:43.512184  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:24:43.548896  341496 cri.go:89] found id: ""
	I0317 11:24:43.548917  341496 logs.go:282] 0 containers: []
	W0317 11:24:43.548926  341496 logs.go:284] No container was found matching "kindnet"
	I0317 11:24:43.548942  341496 logs.go:123] Gathering logs for container status ...
	I0317 11:24:43.548955  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:24:43.586167  341496 logs.go:123] Gathering logs for kube-scheduler [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9] ...
	I0317 11:24:43.586208  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:43.626501  341496 logs.go:123] Gathering logs for kube-proxy [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba] ...
	I0317 11:24:43.626537  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:43.659444  341496 logs.go:123] Gathering logs for kube-controller-manager [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0] ...
	I0317 11:24:43.659470  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:43.704387  341496 logs.go:123] Gathering logs for kubelet ...
	I0317 11:24:43.704417  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:24:43.793479  341496 logs.go:123] Gathering logs for dmesg ...
	I0317 11:24:43.793516  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:24:43.813483  341496 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:24:43.813522  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:24:43.899448  341496 logs.go:123] Gathering logs for kube-apiserver [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610] ...
	I0317 11:24:43.899483  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:43.941628  341496 logs.go:123] Gathering logs for etcd [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe] ...
	I0317 11:24:43.941659  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:43.981639  341496 logs.go:123] Gathering logs for containerd ...
	I0317 11:24:43.981675  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:24:45.462816  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:45.462854  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:45.462861  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:24:45.462869  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:45.462873  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:24:45.462878  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:24:45.462881  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:24:45.462885  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:24:45.462888  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:24:45.464780  326404 out.go:201] 
	W0317 11:24:45.466250  326404 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:24:45.466269  326404 out.go:270] * 
	W0317 11:24:45.467129  326404 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:24:45.468845  326404 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1f4c46c704169       6e38f40d628db       9 minutes ago       Running             storage-provisioner       0                   ff79382a2c0a5       storage-provisioner
	b0f53335c4a2e       f1332858868e1       9 minutes ago       Running             kube-proxy                0                   51b34b2ead6ef       kube-proxy-dw92z
	b219bca090425       d8e673e7c9983       9 minutes ago       Running             kube-scheduler            0                   656d300f0b066       kube-scheduler-no-preload-189670
	3309296ea8414       85b7a174738ba       9 minutes ago       Running             kube-apiserver            0                   c4095ba340318       kube-apiserver-no-preload-189670
	9403b6e07fa0d       a9e7e6b294baf       9 minutes ago       Running             etcd                      0                   e3327c799ce46       etcd-no-preload-189670
	330fa8830110a       b6a454c5a800d       9 minutes ago       Running             kube-controller-manager   0                   23c4756e6900e       kube-controller-manager-no-preload-189670
	
	
	==> containerd <==
	Mar 17 11:22:13 no-preload-189670 containerd[879]: time="2025-03-17T11:22:13.156664965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b48c555275aefbd46134da5b3ee84667004d2d5472505c0c488c07071d73f19d\": failed to find network info for sandbox \"b48c555275aefbd46134da5b3ee84667004d2d5472505c0c488c07071d73f19d\""
	Mar 17 11:22:25 no-preload-189670 containerd[879]: time="2025-03-17T11:22:25.137644294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:22:25 no-preload-189670 containerd[879]: time="2025-03-17T11:22:25.156447643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe3dff1fc92c68318911e798ec2f80827d7c5474f3a719cbe390834ebe00b211\": failed to find network info for sandbox \"fe3dff1fc92c68318911e798ec2f80827d7c5474f3a719cbe390834ebe00b211\""
	Mar 17 11:22:40 no-preload-189670 containerd[879]: time="2025-03-17T11:22:40.138078637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:22:40 no-preload-189670 containerd[879]: time="2025-03-17T11:22:40.156996116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"466146b25e2bf9f8281407b7393185d4068310be94133d7ba5c5db458f8a42f1\": failed to find network info for sandbox \"466146b25e2bf9f8281407b7393185d4068310be94133d7ba5c5db458f8a42f1\""
	Mar 17 11:22:53 no-preload-189670 containerd[879]: time="2025-03-17T11:22:53.137823107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:22:53 no-preload-189670 containerd[879]: time="2025-03-17T11:22:53.156600172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdc1ca904aae2a9f5e59badf3c19b8aa1f273fa4d1aa7d53606de6d341645385\": failed to find network info for sandbox \"bdc1ca904aae2a9f5e59badf3c19b8aa1f273fa4d1aa7d53606de6d341645385\""
	Mar 17 11:23:05 no-preload-189670 containerd[879]: time="2025-03-17T11:23:05.138032356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:23:05 no-preload-189670 containerd[879]: time="2025-03-17T11:23:05.159916221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cfde08d5815c8a24faa601956917bd7f93a2734d1246635e88dc4640189ea7e8\": failed to find network info for sandbox \"cfde08d5815c8a24faa601956917bd7f93a2734d1246635e88dc4640189ea7e8\""
	Mar 17 11:23:17 no-preload-189670 containerd[879]: time="2025-03-17T11:23:17.136961984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:23:17 no-preload-189670 containerd[879]: time="2025-03-17T11:23:17.155549832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e351e1b7e73f19c9f7639a421b45a66bf72a300699eed5f4ecaa9bc7e1baf5c\": failed to find network info for sandbox \"1e351e1b7e73f19c9f7639a421b45a66bf72a300699eed5f4ecaa9bc7e1baf5c\""
	Mar 17 11:23:28 no-preload-189670 containerd[879]: time="2025-03-17T11:23:28.137399552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:23:28 no-preload-189670 containerd[879]: time="2025-03-17T11:23:28.156013234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f1b31c3a06d64d0e3f5b3c7cc665754f961520a18a6daeffc3628f55c767053\": failed to find network info for sandbox \"7f1b31c3a06d64d0e3f5b3c7cc665754f961520a18a6daeffc3628f55c767053\""
	Mar 17 11:23:40 no-preload-189670 containerd[879]: time="2025-03-17T11:23:40.137701030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:23:40 no-preload-189670 containerd[879]: time="2025-03-17T11:23:40.157253753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db0dcfce7c4f273ceb0a85fabeba3b9f2df5bfecdb80fe84abd557d33c212262\": failed to find network info for sandbox \"db0dcfce7c4f273ceb0a85fabeba3b9f2df5bfecdb80fe84abd557d33c212262\""
	Mar 17 11:23:52 no-preload-189670 containerd[879]: time="2025-03-17T11:23:52.137605214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:23:52 no-preload-189670 containerd[879]: time="2025-03-17T11:23:52.159843632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f516367b20a1cfaf10509cf979576acff736f6dba957033ea166b4ce33fd6b88\": failed to find network info for sandbox \"f516367b20a1cfaf10509cf979576acff736f6dba957033ea166b4ce33fd6b88\""
	Mar 17 11:24:03 no-preload-189670 containerd[879]: time="2025-03-17T11:24:03.138144677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:24:03 no-preload-189670 containerd[879]: time="2025-03-17T11:24:03.156993636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be9658f1662a6f2ed37af5b9ff7e13f15a2cda424a76c469d7f6ab7875ac79c4\": failed to find network info for sandbox \"be9658f1662a6f2ed37af5b9ff7e13f15a2cda424a76c469d7f6ab7875ac79c4\""
	Mar 17 11:24:18 no-preload-189670 containerd[879]: time="2025-03-17T11:24:18.137781763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:24:18 no-preload-189670 containerd[879]: time="2025-03-17T11:24:18.157225721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b2a91aece6f6f67c11c35254f8898a2552c08543d6eb17baadbbc333aac34dd\": failed to find network info for sandbox \"3b2a91aece6f6f67c11c35254f8898a2552c08543d6eb17baadbbc333aac34dd\""
	Mar 17 11:24:31 no-preload-189670 containerd[879]: time="2025-03-17T11:24:31.137840948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:24:31 no-preload-189670 containerd[879]: time="2025-03-17T11:24:31.160042654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3be554c27e0803871c05d11013931cd43319058342629c245ba5d348b4913769\": failed to find network info for sandbox \"3be554c27e0803871c05d11013931cd43319058342629c245ba5d348b4913769\""
	Mar 17 11:24:44 no-preload-189670 containerd[879]: time="2025-03-17T11:24:44.137301158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:24:44 no-preload-189670 containerd[879]: time="2025-03-17T11:24:44.155895426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nrkfd,Uid:20fa0930-1a0e-4878-a0a6-91d0cc8a89f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf66a77283955d1c20beffe096572ab13acc3afc5a0e93e1ac01a916deb7e836\": failed to find network info for sandbox \"cf66a77283955d1c20beffe096572ab13acc3afc5a0e93e1ac01a916deb7e836\""
	
	
	==> describe nodes <==
	Name:               no-preload-189670
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-189670
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=no-preload-189670
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T11_15_03_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 11:14:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-189670
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:24:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:22:19 +0000   Mon, 17 Mar 2025 11:14:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:22:19 +0000   Mon, 17 Mar 2025 11:14:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:22:19 +0000   Mon, 17 Mar 2025 11:14:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:22:19 +0000   Mon, 17 Mar 2025 11:14:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-189670
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 3704c81054ef4e1aa1566499f9c7616a
	  System UUID:                37745217-ac6a-4b4d-8bd0-20d92b85181d
	  Boot ID:                    6cdff8eb-9dff-46dc-b46a-15af38578335
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.25
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-nrkfd                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m39s
	  kube-system                 etcd-no-preload-189670                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m44s
	  kube-system                 kindnet-x964l                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m40s
	  kube-system                 kube-apiserver-no-preload-189670             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-controller-manager-no-preload-189670    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-proxy-dw92z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 kube-scheduler-no-preload-189670             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 9m38s  kube-proxy       
	  Normal   Starting                 9m44s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m44s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  9m44s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m44s  kubelet          Node no-preload-189670 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m44s  kubelet          Node no-preload-189670 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m44s  kubelet          Node no-preload-189670 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m40s  node-controller  Node no-preload-189670 event: Registered Node no-preload-189670 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 9f 34 c1 3c 2d 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea db 01 46 f3 5d 08 06
	[Mar17 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 03 06 1a ae 04 08 06
	[Mar17 11:11] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ba d0 41 5a 57 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 03 06 1a ae 04 08 06
	[ +43.804696] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 68 f0 20 09 1d 08 06
	[  +0.014204] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 35 88 eb 1a ca 08 06
	[Mar17 11:12] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 40 5e e0 f5 10 08 06
	[  +0.000328] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da 68 f0 20 09 1d 08 06
	[Mar17 11:13] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 9d fa 19 03 e5 08 06
	[  +0.000467] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7a 6b 3f 12 54 e7 08 06
	[Mar17 11:14] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 15 0b 3c 2b d0 08 06
	[  +0.000401] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7a 6b 3f 12 54 e7 08 06
	
	
	==> etcd [9403b6e07fa0d9bff6a10a10b218514fab81e9cb4957d942fcf73e0ad8f038dd] <==
	{"level":"info","ts":"2025-03-17T11:14:57.522953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-03-17T11:14:57.522963Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-03-17T11:14:57.523478Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-03-17T11:14:57.523079Z","caller":"etcdserver/server.go:757","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f23060b075c4c089","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-03-17T11:14:57.523560Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-03-17T11:14:57.654306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-03-17T11:14:57.654358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-03-17T11:14:57.654402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-03-17T11:14:57.654423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-03-17T11:14:57.654442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-03-17T11:14:57.654456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-03-17T11:14:57.654470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-03-17T11:14:57.655865Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-189670 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T11:14:57.655909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T11:14:57.656004Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T11:14:57.656340Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T11:14:57.656490Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T11:14:57.656518Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-17T11:14:57.657055Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T11:14:57.657313Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T11:14:57.657898Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T11:14:57.658076Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-03-17T11:14:57.704220Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T11:14:57.704457Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T11:14:57.704527Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:24:46 up  1:06,  0 users,  load average: 1.08, 0.98, 1.24
	Linux no-preload-189670 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [3309296ea8414c0fb74c0936fe4b0fcc6593b7bf1d784d4506a0ddd69e3ece3b] <==
	I0317 11:14:59.704092       1 shared_informer.go:320] Caches are synced for configmaps
	I0317 11:14:59.704222       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0317 11:14:59.705093       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0317 11:14:59.705131       1 policy_source.go:240] refreshing policies
	I0317 11:14:59.705411       1 controller.go:615] quota admission added evaluator for: namespaces
	I0317 11:14:59.703668       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0317 11:14:59.707437       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0317 11:14:59.708944       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0317 11:14:59.719120       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0317 11:14:59.803718       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 11:15:00.548447       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0317 11:15:00.553597       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0317 11:15:00.553622       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0317 11:15:01.005713       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 11:15:01.044450       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 11:15:01.114160       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0317 11:15:01.120425       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0317 11:15:01.121565       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 11:15:01.126298       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 11:15:01.627845       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 11:15:02.248871       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 11:15:02.259596       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0317 11:15:02.273022       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 11:15:06.929926       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0317 11:15:07.129069       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [330fa8830110af11165ed37adfb7690dd3a58adf52367139a00e09f674b88248] <==
	I0317 11:15:06.180778       1 shared_informer.go:320] Caches are synced for namespace
	I0317 11:15:06.183113       1 shared_informer.go:320] Caches are synced for node
	I0317 11:15:06.183301       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0317 11:15:06.183352       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0317 11:15:06.183357       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 11:15:06.183364       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0317 11:15:06.183374       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0317 11:15:06.184274       1 shared_informer.go:320] Caches are synced for HPA
	I0317 11:15:06.189505       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-189670" podCIDRs=["10.244.0.0/24"]
	I0317 11:15:06.189537       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-189670"
	I0317 11:15:06.189568       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-189670"
	I0317 11:15:06.200678       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 11:15:06.983669       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-189670"
	I0317 11:15:07.330629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="197.679826ms"
	I0317 11:15:07.408937       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="78.244913ms"
	I0317 11:15:07.409071       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="86.854µs"
	I0317 11:15:07.425292       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="62.452µs"
	I0317 11:15:08.207803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="67.013773ms"
	I0317 11:15:08.215548       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="7.690539ms"
	I0317 11:15:08.215684       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="94.564µs"
	I0317 11:15:09.257096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="77.927µs"
	I0317 11:15:09.262840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="74.043µs"
	I0317 11:15:09.266440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="84.613µs"
	I0317 11:15:12.757947       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-189670"
	I0317 11:22:19.734762       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-189670"
	
	
	==> kube-proxy [b0f53335c4a2ea5f2da7fa7af13194a131decea5b6a4bb229f9e627b4e81fa0e] <==
	I0317 11:15:08.018418       1 server_linux.go:66] "Using iptables proxy"
	I0317 11:15:08.229702       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0317 11:15:08.229828       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 11:15:08.314347       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0317 11:15:08.314431       1 server_linux.go:170] "Using iptables Proxier"
	I0317 11:15:08.317301       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 11:15:08.317756       1 server.go:497] "Version info" version="v1.32.2"
	I0317 11:15:08.317778       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 11:15:08.319172       1 config.go:105] "Starting endpoint slice config controller"
	I0317 11:15:08.319213       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 11:15:08.319394       1 config.go:199] "Starting service config controller"
	I0317 11:15:08.319409       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 11:15:08.319624       1 config.go:329] "Starting node config controller"
	I0317 11:15:08.319638       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 11:15:08.420231       1 shared_informer.go:320] Caches are synced for service config
	I0317 11:15:08.420230       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 11:15:08.420252       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b219bca090425646e62ca112c20f32106f2539942e5fc365f8237031e7c95c99] <==
	E0317 11:14:59.723538       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0317 11:14:59.723157       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0317 11:14:59.723614       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0317 11:14:59.723563       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:14:59.723731       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0317 11:14:59.723785       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0317 11:14:59.723836       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 11:14:59.723882       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:14:59.723992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 11:14:59.724016       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:15:00.579085       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0317 11:15:00.579136       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 11:15:00.581081       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 11:15:00.581124       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:15:00.725537       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0317 11:15:00.725577       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:15:00.771135       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 11:15:00.771182       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:15:00.785703       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 11:15:00.785765       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:15:00.807355       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 11:15:00.807416       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:15:00.849150       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0317 11:15:00.849193       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0317 11:15:01.219716       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 11:23:52 no-preload-189670 kubelet[2343]: E0317 11:23:52.160055    2343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f516367b20a1cfaf10509cf979576acff736f6dba957033ea166b4ce33fd6b88\": failed to find network info for sandbox \"f516367b20a1cfaf10509cf979576acff736f6dba957033ea166b4ce33fd6b88\""
	Mar 17 11:23:52 no-preload-189670 kubelet[2343]: E0317 11:23:52.160120    2343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f516367b20a1cfaf10509cf979576acff736f6dba957033ea166b4ce33fd6b88\": failed to find network info for sandbox \"f516367b20a1cfaf10509cf979576acff736f6dba957033ea166b4ce33fd6b88\"" pod="kube-system/coredns-668d6bf9bc-nrkfd"
	Mar 17 11:23:52 no-preload-189670 kubelet[2343]: E0317 11:23:52.160141    2343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f516367b20a1cfaf10509cf979576acff736f6dba957033ea166b4ce33fd6b88\": failed to find network info for sandbox \"f516367b20a1cfaf10509cf979576acff736f6dba957033ea166b4ce33fd6b88\"" pod="kube-system/coredns-668d6bf9bc-nrkfd"
	Mar 17 11:23:52 no-preload-189670 kubelet[2343]: E0317 11:23:52.160182    2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nrkfd_kube-system(20fa0930-1a0e-4878-a0a6-91d0cc8a89f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nrkfd_kube-system(20fa0930-1a0e-4878-a0a6-91d0cc8a89f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f516367b20a1cfaf10509cf979576acff736f6dba957033ea166b4ce33fd6b88\\\": failed to find network info for sandbox \\\"f516367b20a1cfaf10509cf979576acff736f6dba957033ea166b4ce33fd6b88\\\"\"" pod="kube-system/coredns-668d6bf9bc-nrkfd" podUID="20fa0930-1a0e-4878-a0a6-91d0cc8a89f9"
	Mar 17 11:23:55 no-preload-189670 kubelet[2343]: E0317 11:23:55.138089    2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-x964l" podUID="73733da1-487f-4ec5-a874-944d550d90d2"
	Mar 17 11:24:03 no-preload-189670 kubelet[2343]: E0317 11:24:03.157261    2343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be9658f1662a6f2ed37af5b9ff7e13f15a2cda424a76c469d7f6ab7875ac79c4\": failed to find network info for sandbox \"be9658f1662a6f2ed37af5b9ff7e13f15a2cda424a76c469d7f6ab7875ac79c4\""
	Mar 17 11:24:03 no-preload-189670 kubelet[2343]: E0317 11:24:03.157335    2343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be9658f1662a6f2ed37af5b9ff7e13f15a2cda424a76c469d7f6ab7875ac79c4\": failed to find network info for sandbox \"be9658f1662a6f2ed37af5b9ff7e13f15a2cda424a76c469d7f6ab7875ac79c4\"" pod="kube-system/coredns-668d6bf9bc-nrkfd"
	Mar 17 11:24:03 no-preload-189670 kubelet[2343]: E0317 11:24:03.157357    2343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be9658f1662a6f2ed37af5b9ff7e13f15a2cda424a76c469d7f6ab7875ac79c4\": failed to find network info for sandbox \"be9658f1662a6f2ed37af5b9ff7e13f15a2cda424a76c469d7f6ab7875ac79c4\"" pod="kube-system/coredns-668d6bf9bc-nrkfd"
	Mar 17 11:24:03 no-preload-189670 kubelet[2343]: E0317 11:24:03.157399    2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nrkfd_kube-system(20fa0930-1a0e-4878-a0a6-91d0cc8a89f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nrkfd_kube-system(20fa0930-1a0e-4878-a0a6-91d0cc8a89f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be9658f1662a6f2ed37af5b9ff7e13f15a2cda424a76c469d7f6ab7875ac79c4\\\": failed to find network info for sandbox \\\"be9658f1662a6f2ed37af5b9ff7e13f15a2cda424a76c469d7f6ab7875ac79c4\\\"\"" pod="kube-system/coredns-668d6bf9bc-nrkfd" podUID="20fa0930-1a0e-4878-a0a6-91d0cc8a89f9"
	Mar 17 11:24:06 no-preload-189670 kubelet[2343]: E0317 11:24:06.139579    2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-x964l" podUID="73733da1-487f-4ec5-a874-944d550d90d2"
	Mar 17 11:24:18 no-preload-189670 kubelet[2343]: E0317 11:24:18.137978    2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-x964l" podUID="73733da1-487f-4ec5-a874-944d550d90d2"
	Mar 17 11:24:18 no-preload-189670 kubelet[2343]: E0317 11:24:18.157452    2343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b2a91aece6f6f67c11c35254f8898a2552c08543d6eb17baadbbc333aac34dd\": failed to find network info for sandbox \"3b2a91aece6f6f67c11c35254f8898a2552c08543d6eb17baadbbc333aac34dd\""
	Mar 17 11:24:18 no-preload-189670 kubelet[2343]: E0317 11:24:18.157542    2343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b2a91aece6f6f67c11c35254f8898a2552c08543d6eb17baadbbc333aac34dd\": failed to find network info for sandbox \"3b2a91aece6f6f67c11c35254f8898a2552c08543d6eb17baadbbc333aac34dd\"" pod="kube-system/coredns-668d6bf9bc-nrkfd"
	Mar 17 11:24:18 no-preload-189670 kubelet[2343]: E0317 11:24:18.157573    2343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b2a91aece6f6f67c11c35254f8898a2552c08543d6eb17baadbbc333aac34dd\": failed to find network info for sandbox \"3b2a91aece6f6f67c11c35254f8898a2552c08543d6eb17baadbbc333aac34dd\"" pod="kube-system/coredns-668d6bf9bc-nrkfd"
	Mar 17 11:24:18 no-preload-189670 kubelet[2343]: E0317 11:24:18.157621    2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nrkfd_kube-system(20fa0930-1a0e-4878-a0a6-91d0cc8a89f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nrkfd_kube-system(20fa0930-1a0e-4878-a0a6-91d0cc8a89f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b2a91aece6f6f67c11c35254f8898a2552c08543d6eb17baadbbc333aac34dd\\\": failed to find network info for sandbox \\\"3b2a91aece6f6f67c11c35254f8898a2552c08543d6eb17baadbbc333aac34dd\\\"\"" pod="kube-system/coredns-668d6bf9bc-nrkfd" podUID="20fa0930-1a0e-4878-a0a6-91d0cc8a89f9"
	Mar 17 11:24:31 no-preload-189670 kubelet[2343]: E0317 11:24:31.160365    2343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3be554c27e0803871c05d11013931cd43319058342629c245ba5d348b4913769\": failed to find network info for sandbox \"3be554c27e0803871c05d11013931cd43319058342629c245ba5d348b4913769\""
	Mar 17 11:24:31 no-preload-189670 kubelet[2343]: E0317 11:24:31.160451    2343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3be554c27e0803871c05d11013931cd43319058342629c245ba5d348b4913769\": failed to find network info for sandbox \"3be554c27e0803871c05d11013931cd43319058342629c245ba5d348b4913769\"" pod="kube-system/coredns-668d6bf9bc-nrkfd"
	Mar 17 11:24:31 no-preload-189670 kubelet[2343]: E0317 11:24:31.160479    2343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3be554c27e0803871c05d11013931cd43319058342629c245ba5d348b4913769\": failed to find network info for sandbox \"3be554c27e0803871c05d11013931cd43319058342629c245ba5d348b4913769\"" pod="kube-system/coredns-668d6bf9bc-nrkfd"
	Mar 17 11:24:31 no-preload-189670 kubelet[2343]: E0317 11:24:31.160572    2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nrkfd_kube-system(20fa0930-1a0e-4878-a0a6-91d0cc8a89f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nrkfd_kube-system(20fa0930-1a0e-4878-a0a6-91d0cc8a89f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3be554c27e0803871c05d11013931cd43319058342629c245ba5d348b4913769\\\": failed to find network info for sandbox \\\"3be554c27e0803871c05d11013931cd43319058342629c245ba5d348b4913769\\\"\"" pod="kube-system/coredns-668d6bf9bc-nrkfd" podUID="20fa0930-1a0e-4878-a0a6-91d0cc8a89f9"
	Mar 17 11:24:33 no-preload-189670 kubelet[2343]: E0317 11:24:33.137447    2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-x964l" podUID="73733da1-487f-4ec5-a874-944d550d90d2"
	Mar 17 11:24:44 no-preload-189670 kubelet[2343]: E0317 11:24:44.156118    2343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf66a77283955d1c20beffe096572ab13acc3afc5a0e93e1ac01a916deb7e836\": failed to find network info for sandbox \"cf66a77283955d1c20beffe096572ab13acc3afc5a0e93e1ac01a916deb7e836\""
	Mar 17 11:24:44 no-preload-189670 kubelet[2343]: E0317 11:24:44.156180    2343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf66a77283955d1c20beffe096572ab13acc3afc5a0e93e1ac01a916deb7e836\": failed to find network info for sandbox \"cf66a77283955d1c20beffe096572ab13acc3afc5a0e93e1ac01a916deb7e836\"" pod="kube-system/coredns-668d6bf9bc-nrkfd"
	Mar 17 11:24:44 no-preload-189670 kubelet[2343]: E0317 11:24:44.156202    2343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf66a77283955d1c20beffe096572ab13acc3afc5a0e93e1ac01a916deb7e836\": failed to find network info for sandbox \"cf66a77283955d1c20beffe096572ab13acc3afc5a0e93e1ac01a916deb7e836\"" pod="kube-system/coredns-668d6bf9bc-nrkfd"
	Mar 17 11:24:44 no-preload-189670 kubelet[2343]: E0317 11:24:44.156242    2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nrkfd_kube-system(20fa0930-1a0e-4878-a0a6-91d0cc8a89f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nrkfd_kube-system(20fa0930-1a0e-4878-a0a6-91d0cc8a89f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf66a77283955d1c20beffe096572ab13acc3afc5a0e93e1ac01a916deb7e836\\\": failed to find network info for sandbox \\\"cf66a77283955d1c20beffe096572ab13acc3afc5a0e93e1ac01a916deb7e836\\\"\"" pod="kube-system/coredns-668d6bf9bc-nrkfd" podUID="20fa0930-1a0e-4878-a0a6-91d0cc8a89f9"
	Mar 17 11:24:45 no-preload-189670 kubelet[2343]: E0317 11:24:45.137987    2343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-x964l" podUID="73733da1-487f-4ec5-a874-944d550d90d2"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-189670 -n no-preload-189670
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-189670 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-nrkfd kindnet-x964l
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-189670 describe pod coredns-668d6bf9bc-nrkfd kindnet-x964l
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-189670 describe pod coredns-668d6bf9bc-nrkfd kindnet-x964l: exit status 1 (58.985174ms)
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-nrkfd" not found
	Error from server (NotFound): pods "kindnet-x964l" not found
** /stderr **
helpers_test.go:279: kubectl --context no-preload-189670 describe pod coredns-668d6bf9bc-nrkfd kindnet-x964l: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (609.71s)
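Consistent with the kubelet log above, the failure chain for this run is: the kindnet CNI image pull was rejected by Docker Hub with 429 Too Many Requests (unauthenticated pull rate limit), the CNI therefore never initialized, and every CoreDNS sandbox creation failed with "failed to find network info". Two checks/workarounds, as a sketch (assumes curl and jq are available on the CI host; the probe uses Docker's documented ratelimitpreview/test repository):

	# Probe the remaining anonymous pull quota against Docker Hub
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

	# Side-load the image so kubelet never has to pull it (pull once on the host, authenticated if needed)
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	out/minikube-linux-amd64 -p no-preload-189670 image load docker.io/kindest/kindnetd:v20250214-acbabc1a

The ratelimit-limit/ratelimit-remaining response headers confirm whether the 429 is quota exhaustion; minikube's image load subcommand copies the image directly into the node's containerd store, bypassing the registry pull entirely.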
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (571.67s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-627203 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0317 11:20:11.555462   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:20:13.802351   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:20:30.270914   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/enable-default-cni-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:20:50.080562   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:21:03.387312   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/auto-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:21:17.783118   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:21:52.192476   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/enable-default-cni-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:22:29.943931   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:22:57.644519   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:23:19.528512   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/auto-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:23:47.229028   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/auto-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:23:57.863416   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:24:08.330476   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/enable-default-cni-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:24:14.788794   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-627203 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: exit status 80 (9m29.615944448s)
-- stdout --
	* [default-k8s-diff-port-627203] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "default-k8s-diff-port-627203" primary control-plane node in "default-k8s-diff-port-627203" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0317 11:20:09.951775  341496 out.go:345] Setting OutFile to fd 1 ...
	I0317 11:20:09.951911  341496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:20:09.951918  341496 out.go:358] Setting ErrFile to fd 2...
	I0317 11:20:09.951924  341496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:20:09.952147  341496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 11:20:09.952741  341496 out.go:352] Setting JSON to false
	I0317 11:20:09.954025  341496 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3703,"bootTime":1742206707,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 11:20:09.954091  341496 start.go:139] virtualization: kvm guest
	I0317 11:20:09.956439  341496 out.go:177] * [default-k8s-diff-port-627203] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 11:20:09.957897  341496 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:20:09.957990  341496 notify.go:220] Checking for updates...
	I0317 11:20:09.960721  341496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:20:09.962333  341496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:20:09.963810  341496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 11:20:09.965290  341496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 11:20:09.966759  341496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:20:09.968637  341496 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:09.968800  341496 config.go:182] Loaded profile config "no-preload-189670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:09.968922  341496 config.go:182] Loaded profile config "old-k8s-version-702762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0317 11:20:09.969134  341496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:20:09.994726  341496 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 11:20:09.994957  341496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:20:10.047464  341496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:20:10.037717036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:20:10.047559  341496 docker.go:318] overlay module found
	I0317 11:20:10.049461  341496 out.go:177] * Using the docker driver based on user configuration
	I0317 11:20:10.050764  341496 start.go:297] selected driver: docker
	I0317 11:20:10.050780  341496 start.go:901] validating driver "docker" against <nil>
	I0317 11:20:10.050795  341496 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:20:10.051718  341496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:20:10.105955  341496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:20:10.096342154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:20:10.106128  341496 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:20:10.106353  341496 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:20:10.108473  341496 out.go:177] * Using Docker driver with root privileges
	I0317 11:20:10.109937  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:10.110100  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:10.110117  341496 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 11:20:10.110220  341496 start.go:340] cluster config:
	{Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:20:10.111829  341496 out.go:177] * Starting "default-k8s-diff-port-627203" primary control-plane node in "default-k8s-diff-port-627203" cluster
	I0317 11:20:10.113031  341496 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 11:20:10.114478  341496 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 11:20:10.115992  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:10.116043  341496 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 11:20:10.116053  341496 cache.go:56] Caching tarball of preloaded images
	I0317 11:20:10.116120  341496 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 11:20:10.116149  341496 preload.go:172] Found /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:20:10.116162  341496 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 11:20:10.116325  341496 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json ...
	I0317 11:20:10.116351  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json: {Name:mk848192ef1b40ae1077b4c3a36047479a0034b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:10.138687  341496 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 11:20:10.138707  341496 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 11:20:10.138729  341496 cache.go:230] Successfully downloaded all kic artifacts
	I0317 11:20:10.138768  341496 start.go:360] acquireMachinesLock for default-k8s-diff-port-627203: {Name:mkcbff1d84866f612a979fbe06c726407300b170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:20:10.138896  341496 start.go:364] duration metric: took 104.168µs to acquireMachinesLock for "default-k8s-diff-port-627203"
	I0317 11:20:10.138925  341496 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:20:10.139000  341496 start.go:125] createHost starting for "" (driver="docker")
	I0317 11:20:10.141230  341496 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0317 11:20:10.141482  341496 start.go:159] libmachine.API.Create for "default-k8s-diff-port-627203" (driver="docker")
	I0317 11:20:10.141513  341496 client.go:168] LocalClient.Create starting
	I0317 11:20:10.141581  341496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem
	I0317 11:20:10.141611  341496 main.go:141] libmachine: Decoding PEM data...
	I0317 11:20:10.141625  341496 main.go:141] libmachine: Parsing certificate...
	I0317 11:20:10.141678  341496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem
	I0317 11:20:10.141696  341496 main.go:141] libmachine: Decoding PEM data...
	I0317 11:20:10.141706  341496 main.go:141] libmachine: Parsing certificate...
	I0317 11:20:10.142029  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 11:20:10.160384  341496 cli_runner.go:211] docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 11:20:10.160474  341496 network_create.go:284] running [docker network inspect default-k8s-diff-port-627203] to gather additional debugging logs...
	I0317 11:20:10.160501  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203
	W0317 11:20:10.178195  341496 cli_runner.go:211] docker network inspect default-k8s-diff-port-627203 returned with exit code 1
	I0317 11:20:10.178227  341496 network_create.go:287] error running [docker network inspect default-k8s-diff-port-627203]: docker network inspect default-k8s-diff-port-627203: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-627203 not found
	I0317 11:20:10.178241  341496 network_create.go:289] output of [docker network inspect default-k8s-diff-port-627203]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-627203 not found
	
	** /stderr **
	I0317 11:20:10.178338  341496 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:20:10.197679  341496 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
	I0317 11:20:10.198624  341496 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-00bf62ef0133 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:c5:34:86:d6:21} reservation:<nil>}
	I0317 11:20:10.199639  341496 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-81e0001ceae7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:6a:cf:1c:79:e6} reservation:<nil>}
	I0317 11:20:10.200718  341496 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d24500}
	I0317 11:20:10.200739  341496 network_create.go:124] attempt to create docker network default-k8s-diff-port-627203 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0317 11:20:10.200784  341496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 default-k8s-diff-port-627203
	I0317 11:20:10.255439  341496 network_create.go:108] docker network default-k8s-diff-port-627203 192.168.76.0/24 created
	I0317 11:20:10.255568  341496 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-627203" container
	I0317 11:20:10.255629  341496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 11:20:10.274724  341496 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-627203 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --label created_by.minikube.sigs.k8s.io=true
	I0317 11:20:10.294680  341496 oci.go:103] Successfully created a docker volume default-k8s-diff-port-627203
	I0317 11:20:10.294772  341496 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-627203-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --entrypoint /usr/bin/test -v default-k8s-diff-port-627203:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 11:20:10.747828  341496 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-627203
	I0317 11:20:10.747877  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:10.747900  341496 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 11:20:10.747969  341496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-627203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 11:20:15.344266  341496 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-627203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.596232772s)
	I0317 11:20:15.344302  341496 kic.go:203] duration metric: took 4.596396796s to extract preloaded images to volume ...
	W0317 11:20:15.344459  341496 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 11:20:15.344607  341496 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 11:20:15.397506  341496 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-627203 --name default-k8s-diff-port-627203 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --network default-k8s-diff-port-627203 --ip 192.168.76.2 --volume default-k8s-diff-port-627203:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 11:20:15.665923  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Running}}
	I0317 11:20:15.686899  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:15.706866  341496 cli_runner.go:164] Run: docker exec default-k8s-diff-port-627203 stat /var/lib/dpkg/alternatives/iptables
	I0317 11:20:15.749402  341496 oci.go:144] the created container "default-k8s-diff-port-627203" has a running status.
	I0317 11:20:15.749447  341496 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa...
	I0317 11:20:15.892302  341496 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 11:20:15.918468  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:15.941520  341496 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 11:20:15.941545  341496 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-627203 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 11:20:15.989310  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:16.010066  341496 machine.go:93] provisionDockerMachine start ...
	I0317 11:20:16.010194  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:16.033285  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:16.033637  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:16.033665  341496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:20:16.034656  341496 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46524->127.0.0.1:33103: read: connection reset by peer
	I0317 11:20:19.170824  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-627203
	
	I0317 11:20:19.170859  341496 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-627203"
	I0317 11:20:19.170929  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.189150  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:19.189434  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:19.189452  341496 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-627203 && echo "default-k8s-diff-port-627203" | sudo tee /etc/hostname
	I0317 11:20:19.334316  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-627203
	
	I0317 11:20:19.334392  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.351482  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:19.351684  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:19.351701  341496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-627203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-627203/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-627203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:20:19.483211  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:20:19.483289  341496 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20535-4918/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-4918/.minikube}
	I0317 11:20:19.483331  341496 ubuntu.go:177] setting up certificates
	I0317 11:20:19.483341  341496 provision.go:84] configureAuth start
	I0317 11:20:19.483396  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:19.500645  341496 provision.go:143] copyHostCerts
	I0317 11:20:19.500703  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem, removing ...
	I0317 11:20:19.500713  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem
	I0317 11:20:19.500773  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem (1082 bytes)
	I0317 11:20:19.500859  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem, removing ...
	I0317 11:20:19.500868  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem
	I0317 11:20:19.500892  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem (1123 bytes)
	I0317 11:20:19.500946  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem, removing ...
	I0317 11:20:19.500954  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem
	I0317 11:20:19.500979  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem (1679 bytes)
	I0317 11:20:19.501029  341496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-627203 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-627203 localhost minikube]
	I0317 11:20:19.577076  341496 provision.go:177] copyRemoteCerts
	I0317 11:20:19.577143  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:20:19.577187  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.594134  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:19.688036  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:20:19.710326  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0317 11:20:19.732614  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 11:20:19.753945  341496 provision.go:87] duration metric: took 270.590449ms to configureAuth
	I0317 11:20:19.753968  341496 ubuntu.go:193] setting minikube options for container-runtime
	I0317 11:20:19.754118  341496 config.go:182] Loaded profile config "default-k8s-diff-port-627203": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:19.754128  341496 machine.go:96] duration metric: took 3.744035437s to provisionDockerMachine
	I0317 11:20:19.754134  341496 client.go:171] duration metric: took 9.612615756s to LocalClient.Create
	I0317 11:20:19.754154  341496 start.go:167] duration metric: took 9.612671271s to libmachine.API.Create "default-k8s-diff-port-627203"
	I0317 11:20:19.754161  341496 start.go:293] postStartSetup for "default-k8s-diff-port-627203" (driver="docker")
	I0317 11:20:19.754175  341496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:20:19.754215  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:20:19.754250  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.771203  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:19.872391  341496 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:20:19.875550  341496 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 11:20:19.875582  341496 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 11:20:19.875595  341496 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 11:20:19.875607  341496 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 11:20:19.875635  341496 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/addons for local assets ...
	I0317 11:20:19.875698  341496 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/files for local assets ...
	I0317 11:20:19.875804  341496 filesync.go:149] local asset: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem -> 116902.pem in /etc/ssl/certs
	I0317 11:20:19.875917  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:20:19.883445  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:20:19.905732  341496 start.go:296] duration metric: took 151.558516ms for postStartSetup
	I0317 11:20:19.906060  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:19.925755  341496 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json ...
	I0317 11:20:19.926020  341496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 11:20:19.926086  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.944770  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:20.036185  341496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 11:20:20.040344  341496 start.go:128] duration metric: took 9.901332366s to createHost
	I0317 11:20:20.040365  341496 start.go:83] releasing machines lock for "default-k8s-diff-port-627203", held for 9.901455126s
	I0317 11:20:20.040424  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:20.057945  341496 ssh_runner.go:195] Run: cat /version.json
	I0317 11:20:20.057987  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:20.058044  341496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 11:20:20.058110  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:20.077893  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:20.078299  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:20.248043  341496 ssh_runner.go:195] Run: systemctl --version
	I0317 11:20:20.252422  341496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:20:20.256698  341496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 11:20:20.280151  341496 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 11:20:20.280205  341496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:20:20.303739  341496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0317 11:20:20.303757  341496 start.go:495] detecting cgroup driver to use...
	I0317 11:20:20.303795  341496 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 11:20:20.303871  341496 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:20:20.314490  341496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:20:20.323921  341496 docker.go:217] disabling cri-docker service (if available) ...
	I0317 11:20:20.323964  341496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 11:20:20.336961  341496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 11:20:20.348981  341496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 11:20:20.427755  341496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 11:20:20.507541  341496 docker.go:233] disabling docker service ...
	I0317 11:20:20.507615  341496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 11:20:20.525433  341496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 11:20:20.536350  341496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 11:20:20.601585  341496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 11:20:20.666739  341496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 11:20:20.677294  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:20:20.692169  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:20:20.700729  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:20:20.709826  341496 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:20:20.709888  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:20:20.718738  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:20:20.727842  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:20:20.736960  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:20:20.745738  341496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:20:20.753974  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:20:20.762628  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:20:20.770887  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
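The run of sed edits above rewrites /etc/containerd/config.toml in place. Pieced together from the sed expressions (a reconstruction, not a capture from the node), the affected keys end up roughly as:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false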
	I0317 11:20:20.779873  341496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:20:20.787306  341496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:20:20.794585  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:20.857244  341496 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 11:20:20.962615  341496 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 11:20:20.962696  341496 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 11:20:20.966342  341496 start.go:563] Will wait 60s for crictl version
	I0317 11:20:20.966394  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:20:20.969458  341496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:20:21.000301  341496 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 11:20:21.000364  341496 ssh_runner.go:195] Run: containerd --version
	I0317 11:20:21.021585  341496 ssh_runner.go:195] Run: containerd --version
	I0317 11:20:21.045298  341496 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 11:20:21.046823  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:20:21.063998  341496 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0317 11:20:21.067681  341496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:20:21.078036  341496 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:20:21.078155  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:21.078215  341496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:20:21.110394  341496 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:20:21.110416  341496 containerd.go:534] Images already preloaded, skipping extraction
	I0317 11:20:21.110471  341496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:20:21.147039  341496 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:20:21.147059  341496 cache_images.go:84] Images are preloaded, skipping loading
	I0317 11:20:21.147072  341496 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.32.2 containerd true true} ...
	I0317 11:20:21.147182  341496 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-627203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 11:20:21.147245  341496 ssh_runner.go:195] Run: sudo crictl info
	I0317 11:20:21.180368  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:21.180402  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:21.180417  341496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:20:21.180451  341496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-627203 NodeName:default-k8s-diff-port-627203 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 11:20:21.180598  341496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-627203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
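The kubeadm.go:195 dump above is the complete config that gets copied to the node as kubeadm.yaml below. A quick offline sanity check of a config like this is possible with kubeadm itself (the validate subcommand exists in kubeadm v1.26 and later):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml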
	
	I0317 11:20:21.180676  341496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:20:21.189167  341496 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 11:20:21.189222  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 11:20:21.197091  341496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0317 11:20:21.212836  341496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:20:21.228613  341496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2318 bytes)
	I0317 11:20:21.244235  341496 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0317 11:20:21.247449  341496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
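Both /etc/hosts updates in this run (host.minikube.internal above, control-plane.minikube.internal here) use the same replace-then-append idiom, which keeps repeated starts idempotent. As a standalone sketch with a hypothetical helper name, built on the same grep/printf pattern as the logged command:

    update_hosts_entry() {
      local ip="$1" host="$2"
      # Drop any prior line ending in "<tab><host>", then append the fresh mapping.
      { grep -v $'\t'"${host}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }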
	I0317 11:20:21.257029  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:21.331412  341496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:20:21.344658  341496 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203 for IP: 192.168.76.2
	I0317 11:20:21.344685  341496 certs.go:194] generating shared ca certs ...
	I0317 11:20:21.344706  341496 certs.go:226] acquiring lock for ca certs: {Name:mkf58624c63680e02907d28348d45986283847c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.344852  341496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key
	I0317 11:20:21.344888  341496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key
	I0317 11:20:21.344900  341496 certs.go:256] generating profile certs ...
	I0317 11:20:21.344967  341496 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key
	I0317 11:20:21.344994  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt with IP's: []
	I0317 11:20:21.433063  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt ...
	I0317 11:20:21.433090  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt: {Name:mk081d27f47a46e83ef42cd529ab90efa4a42374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.433242  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key ...
	I0317 11:20:21.433256  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key: {Name:mk3ff3f97f5b6d17c55106167353f358e3be7b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.433330  341496 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2
	I0317 11:20:21.433345  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0317 11:20:21.695664  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 ...
	I0317 11:20:21.695695  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2: {Name:mk7442ef755923abf17c70bd38ce4a38e38e6b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.695884  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2 ...
	I0317 11:20:21.695904  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2: {Name:mke8376d0935665b80188d48fe43b8e5b8ff6f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.695977  341496 certs.go:381] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt
	I0317 11:20:21.696069  341496 certs.go:385] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key
	I0317 11:20:21.696166  341496 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key
	I0317 11:20:21.696189  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt with IP's: []
	I0317 11:20:21.791034  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt ...
	I0317 11:20:21.791067  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt: {Name:mk96f99fc08821936606db2cdde9f87f27d42fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.791243  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key ...
	I0317 11:20:21.791284  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key: {Name:mk0e9ec0c366cd0af025f90a833ba1e60d673556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.791492  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem (1338 bytes)
	W0317 11:20:21.791525  341496 certs.go:480] ignoring /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690_empty.pem, impossibly tiny 0 bytes
	I0317 11:20:21.791536  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 11:20:21.791559  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem (1082 bytes)
	I0317 11:20:21.791585  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem (1123 bytes)
	I0317 11:20:21.791609  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem (1679 bytes)
	I0317 11:20:21.791644  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:20:21.792251  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:20:21.814842  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:20:21.836814  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:20:21.860128  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 11:20:21.881562  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0317 11:20:21.903421  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 11:20:21.928625  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:20:21.951436  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 11:20:21.974719  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem --> /usr/share/ca-certificates/11690.pem (1338 bytes)
	I0317 11:20:21.998103  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /usr/share/ca-certificates/116902.pem (1708 bytes)
	I0317 11:20:22.019954  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:20:22.042505  341496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:20:22.058914  341496 ssh_runner.go:195] Run: openssl version
	I0317 11:20:22.064354  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116902.pem && ln -fs /usr/share/ca-certificates/116902.pem /etc/ssl/certs/116902.pem"
	I0317 11:20:22.073425  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.076909  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:32 /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.076964  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.084480  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116902.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:20:22.094200  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:20:22.103020  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.106304  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.106414  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.112757  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:20:22.121663  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11690.pem && ln -fs /usr/share/ca-certificates/11690.pem /etc/ssl/certs/11690.pem"
	I0317 11:20:22.130150  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.133632  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:32 /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.133685  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.140348  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11690.pem /etc/ssl/certs/51391683.0"
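Each certificate above is trusted by hashing it with openssl and symlinking the hash name into /etc/ssl/certs, which is how OpenSSL's default trust store locates CAs. Reproducing the minikubeCA step from the two log lines above by hand:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 here
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"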
	I0317 11:20:22.148875  341496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:20:22.151896  341496 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:20:22.151951  341496 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:20:22.152020  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 11:20:22.152054  341496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 11:20:22.184980  341496 cri.go:89] found id: ""
	I0317 11:20:22.185043  341496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:20:22.193505  341496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:20:22.201849  341496 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 11:20:22.201930  341496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:20:22.210091  341496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:20:22.210113  341496 kubeadm.go:157] found existing configuration files:
	
	I0317 11:20:22.210163  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0317 11:20:22.218192  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:20:22.218255  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:20:22.226657  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0317 11:20:22.239638  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:20:22.239694  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:20:22.247616  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0317 11:20:22.256388  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:20:22.256448  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:20:22.264706  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0317 11:20:22.272518  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:20:22.272585  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 11:20:22.281056  341496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 11:20:22.333597  341496 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 11:20:22.333966  341496 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 11:20:22.389918  341496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 11:20:31.555534  341496 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 11:20:31.555624  341496 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:20:31.555753  341496 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 11:20:31.555806  341496 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 11:20:31.555879  341496 kubeadm.go:310] OS: Linux
	I0317 11:20:31.555963  341496 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 11:20:31.556040  341496 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 11:20:31.556116  341496 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 11:20:31.556186  341496 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 11:20:31.556263  341496 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 11:20:31.556356  341496 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 11:20:31.556406  341496 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 11:20:31.556449  341496 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 11:20:31.556490  341496 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 11:20:31.556550  341496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:20:31.556678  341496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:20:31.556827  341496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:20:31.556924  341496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:20:31.558772  341496 out.go:235]   - Generating certificates and keys ...
	I0317 11:20:31.558886  341496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:20:31.558955  341496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:20:31.559017  341496 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:20:31.559068  341496 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:20:31.559146  341496 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:20:31.559215  341496 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:20:31.559342  341496 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:20:31.559507  341496 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-627203 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 11:20:31.559566  341496 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:20:31.559687  341496 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-627203 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 11:20:31.559743  341496 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:20:31.559836  341496 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:20:31.559913  341496 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:20:31.560004  341496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:20:31.560089  341496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:20:31.560182  341496 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 11:20:31.560271  341496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:20:31.560363  341496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:20:31.560437  341496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:20:31.560547  341496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:20:31.560619  341496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:20:31.561976  341496 out.go:235]   - Booting up control plane ...
	I0317 11:20:31.562075  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:20:31.562146  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:20:31.562203  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:20:31.562291  341496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:20:31.562370  341496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:20:31.562404  341496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:20:31.562526  341496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 11:20:31.562631  341496 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 11:20:31.562686  341496 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.585498ms
	I0317 11:20:31.562756  341496 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 11:20:31.562810  341496 kubeadm.go:310] [api-check] The API server is healthy after 5.001640951s
	I0317 11:20:31.562926  341496 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:20:31.563043  341496 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:20:31.563096  341496 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:20:31.563308  341496 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-627203 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 11:20:31.563370  341496 kubeadm.go:310] [bootstrap-token] Using token: cynw4v.vidupn9uwbpkry9q
	I0317 11:20:31.565344  341496 out.go:235]   - Configuring RBAC rules ...
	I0317 11:20:31.565438  341496 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:20:31.565516  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:20:31.565649  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:20:31.565854  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:20:31.565999  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:20:31.566087  341496 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:20:31.566197  341496 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:20:31.566250  341496 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:20:31.566293  341496 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:20:31.566298  341496 kubeadm.go:310] 
	I0317 11:20:31.566370  341496 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:20:31.566379  341496 kubeadm.go:310] 
	I0317 11:20:31.566477  341496 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:20:31.566484  341496 kubeadm.go:310] 
	I0317 11:20:31.566505  341496 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:20:31.566555  341496 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:20:31.566599  341496 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:20:31.566605  341496 kubeadm.go:310] 
	I0317 11:20:31.566649  341496 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:20:31.566655  341496 kubeadm.go:310] 
	I0317 11:20:31.566724  341496 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:20:31.566735  341496 kubeadm.go:310] 
	I0317 11:20:31.566814  341496 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:20:31.566915  341496 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:20:31.567023  341496 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:20:31.567040  341496 kubeadm.go:310] 
	I0317 11:20:31.567157  341496 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:20:31.567285  341496 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:20:31.567299  341496 kubeadm.go:310] 
	I0317 11:20:31.567400  341496 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cynw4v.vidupn9uwbpkry9q \
	I0317 11:20:31.567505  341496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 \
	I0317 11:20:31.567540  341496 kubeadm.go:310] 	--control-plane 
	I0317 11:20:31.567550  341496 kubeadm.go:310] 
	I0317 11:20:31.567675  341496 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:20:31.567685  341496 kubeadm.go:310] 
	I0317 11:20:31.567820  341496 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cynw4v.vidupn9uwbpkry9q \
	I0317 11:20:31.567990  341496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 
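The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA with the standard openssl pipeline (CA path per the certificatesDir in the config above):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'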
	I0317 11:20:31.568005  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:31.568014  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:31.570308  341496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 11:20:31.571654  341496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 11:20:31.575330  341496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 11:20:31.575346  341496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 11:20:31.592203  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 11:20:31.796107  341496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:20:31.796185  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:31.796227  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-627203 minikube.k8s.io/updated_at=2025_03_17T11_20_31_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=default-k8s-diff-port-627203 minikube.k8s.io/primary=true
	I0317 11:20:31.913761  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:31.913762  341496 ops.go:34] apiserver oom_adj: -16
	I0317 11:20:32.414495  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:32.914861  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:33.414784  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:33.914144  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:34.414705  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:34.913915  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:35.414122  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:35.487512  341496 kubeadm.go:1113] duration metric: took 3.691382531s to wait for elevateKubeSystemPrivileges
	I0317 11:20:35.487556  341496 kubeadm.go:394] duration metric: took 13.335608972s to StartCluster
	I0317 11:20:35.487576  341496 settings.go:142] acquiring lock: {Name:mk2a57d556efff40ccd4336229d7a78216b861f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:35.487640  341496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:20:35.489566  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/kubeconfig: {Name:mk686b9f6159ab958672b945ae0aa5a9c96e9ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:35.489774  341496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:20:35.489881  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:20:35.489943  341496 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:20:35.490029  341496 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-627203"
	I0317 11:20:35.490056  341496 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-627203"
	I0317 11:20:35.490076  341496 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-627203"
	I0317 11:20:35.490078  341496 config.go:182] Loaded profile config "default-k8s-diff-port-627203": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:35.490098  341496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-627203"
	I0317 11:20:35.490113  341496 host.go:66] Checking if "default-k8s-diff-port-627203" exists ...
	I0317 11:20:35.490455  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.490636  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.491384  341496 out.go:177] * Verifying Kubernetes components...
	I0317 11:20:35.492644  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:35.518758  341496 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-627203"
	I0317 11:20:35.518803  341496 host.go:66] Checking if "default-k8s-diff-port-627203" exists ...
	I0317 11:20:35.519164  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.520182  341496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:20:35.521412  341496 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:20:35.521431  341496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:20:35.521480  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:35.546610  341496 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:20:35.546635  341496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:20:35.546679  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:35.549777  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:35.572702  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:35.624663  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
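The sed pipeline above splices a log directive and a hosts block into the CoreDNS Corefile before re-applying the ConfigMap. Reconstructed from the two sed expressions (not captured from the cluster), the edited server block ends up containing, in order:

    log
    errors
    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf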
	I0317 11:20:35.637144  341496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:20:35.724225  341496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:20:35.825754  341496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:20:36.141080  341496 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0317 11:20:36.142459  341496 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-627203" to be "Ready" ...
	I0317 11:20:36.207177  341496 node_ready.go:49] node "default-k8s-diff-port-627203" has status "Ready":"True"
	I0317 11:20:36.207215  341496 node_ready.go:38] duration metric: took 64.732247ms for node "default-k8s-diff-port-627203" to be "Ready" ...
	I0317 11:20:36.207231  341496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:20:36.211865  341496 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace to be "Ready" ...
	I0317 11:20:36.619880  341496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:20:36.621467  341496 addons.go:514] duration metric: took 1.131519409s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 11:20:36.646479  341496 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-627203" context rescaled to 1 replicas
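The rescale logged at kapi.go:214 trims the CoreDNS deployment from its default two replicas down to one; the manual equivalent would be:

    kubectl -n kube-system scale deployment coredns --replicas=1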
	I0317 11:20:38.217170  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	[... pod_ready.go:103 repeats the same check roughly every 2.5s from 11:20:40.217729 through 11:23:18.216354: pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False" ...]
	I0317 11:23:20.217860  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:22.716518  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:24.717213  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:27.216933  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:29.717016  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:32.216483  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:34.217018  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:36.716769  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:38.717020  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:40.717051  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:42.717588  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:45.216717  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:47.717009  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:49.718582  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:52.216980  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:54.217886  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:56.717131  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:59.217483  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:01.717240  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:04.216952  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:06.217363  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:08.717047  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:11.216816  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:13.217215  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:15.217429  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:17.717023  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:20.216953  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:22.216989  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:24.716953  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:27.217304  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:29.717972  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:32.217812  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:34.716771  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:36.216864  341496 pod_ready.go:82] duration metric: took 4m0.004958001s for pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace to be "Ready" ...
	E0317 11:24:36.216891  341496 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:24:36.216901  341496 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.218595  341496 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zwq6r" not found
	I0317 11:24:36.218617  341496 pod_ready.go:82] duration metric: took 1.707352ms for pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace to be "Ready" ...
	E0317 11:24:36.218628  341496 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zwq6r" not found
	I0317 11:24:36.218636  341496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.222286  341496 pod_ready.go:93] pod "etcd-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.222302  341496 pod_ready.go:82] duration metric: took 3.659438ms for pod "etcd-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.222314  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.225705  341496 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.225722  341496 pod_ready.go:82] duration metric: took 3.400096ms for pod "kube-apiserver-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.225735  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.228777  341496 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.228794  341496 pod_ready.go:82] duration metric: took 3.051925ms for pod "kube-controller-manager-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.228805  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lxqgz" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.415375  341496 pod_ready.go:93] pod "kube-proxy-lxqgz" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.415396  341496 pod_ready.go:82] duration metric: took 186.584372ms for pod "kube-proxy-lxqgz" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.415406  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.814949  341496 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.814974  341496 pod_ready.go:82] duration metric: took 399.56185ms for pod "kube-scheduler-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.814983  341496 pod_ready.go:39] duration metric: took 4m0.60773487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
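The long run of pod_ready lines above is minikube polling the coredns pod's Ready condition roughly every 2.5 seconds until its 4-minute budget for that pod expires. A minimal, self-contained sketch of that pattern using client-go follows; it is not minikube's actual pod_ready implementation, and the kubeconfig path, poll interval, and timeout are assumptions taken from the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err // the log treats "not found" as a skippable error (pod_ready.go:98)
            }
            if podReady(pod) {
                return nil
            }
            time.Sleep(2500 * time.Millisecond) // ~2.5s cadence, matching the timestamps above
        }
        return fmt.Errorf("context deadline exceeded waiting for %s/%s", ns, name)
    }

    func main() {
        // Kubeconfig path borrowed from the in-cluster commands in this log; adjust as needed.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-668d6bf9bc-tm7kk", 4*time.Minute); err != nil {
            fmt.Println("wait failed:", err)
        }
    }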
	I0317 11:24:36.815000  341496 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:24:36.815049  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:24:36.815111  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:24:36.850770  341496 cri.go:89] found id: "ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:36.850801  341496 cri.go:89] found id: ""
	I0317 11:24:36.850811  341496 logs.go:282] 1 containers: [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610]
	I0317 11:24:36.850864  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.854204  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:24:36.854262  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:24:36.887650  341496 cri.go:89] found id: "bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:36.887674  341496 cri.go:89] found id: ""
	I0317 11:24:36.887682  341496 logs.go:282] 1 containers: [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe]
	I0317 11:24:36.887732  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.891072  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:24:36.891141  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:24:36.924018  341496 cri.go:89] found id: ""
	I0317 11:24:36.924041  341496 logs.go:282] 0 containers: []
	W0317 11:24:36.924052  341496 logs.go:284] No container was found matching "coredns"
	I0317 11:24:36.924059  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:24:36.924132  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:24:36.957551  341496 cri.go:89] found id: "17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:36.957576  341496 cri.go:89] found id: ""
	I0317 11:24:36.957585  341496 logs.go:282] 1 containers: [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9]
	I0317 11:24:36.957640  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.961125  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:24:36.961193  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:24:36.995097  341496 cri.go:89] found id: "a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:36.995124  341496 cri.go:89] found id: ""
	I0317 11:24:36.995135  341496 logs.go:282] 1 containers: [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba]
	I0317 11:24:36.995183  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.998558  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:24:36.998615  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:24:37.030703  341496 cri.go:89] found id: "e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:37.030731  341496 cri.go:89] found id: ""
	I0317 11:24:37.030741  341496 logs.go:282] 1 containers: [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0]
	I0317 11:24:37.030824  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:37.034348  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:24:37.034410  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:24:37.066880  341496 cri.go:89] found id: ""
	I0317 11:24:37.066922  341496 logs.go:282] 0 containers: []
	W0317 11:24:37.066933  341496 logs.go:284] No container was found matching "kindnet"
	I0317 11:24:37.066950  341496 logs.go:123] Gathering logs for etcd [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe] ...
	I0317 11:24:37.066964  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:37.104789  341496 logs.go:123] Gathering logs for kube-scheduler [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9] ...
	I0317 11:24:37.104816  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:37.143054  341496 logs.go:123] Gathering logs for kube-proxy [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba] ...
	I0317 11:24:37.143083  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:37.177575  341496 logs.go:123] Gathering logs for kube-controller-manager [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0] ...
	I0317 11:24:37.177610  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:37.223926  341496 logs.go:123] Gathering logs for containerd ...
	I0317 11:24:37.223956  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:24:37.272572  341496 logs.go:123] Gathering logs for kubelet ...
	I0317 11:24:37.272600  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:24:37.363184  341496 logs.go:123] Gathering logs for dmesg ...
	I0317 11:24:37.363214  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:24:37.384660  341496 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:24:37.384687  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:24:37.470494  341496 logs.go:123] Gathering logs for kube-apiserver [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610] ...
	I0317 11:24:37.470522  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:37.510318  341496 logs.go:123] Gathering logs for container status ...
	I0317 11:24:37.510345  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
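Each diagnostic cycle above starts by locating the control-plane containers through the CRI, then tails their logs. A rough local equivalent of the discovery step, using the same crictl invocation the log records (and assuming crictl is installed and sudo is non-interactive), might look like this:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same lookup seen in the log:
    //   sudo crictl ps -a --quiet --name=<component>
    // and returns the matching container IDs (possibly none).
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: lookup failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
        }
    }

An empty result for coredns and kindnet, as in the pass above, is exactly the condition that later fails the k8s-apps check.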
	I0317 11:24:40.047399  341496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:24:40.058642  341496 api_server.go:72] duration metric: took 4m4.568840658s to wait for apiserver process to appear ...
	I0317 11:24:40.058671  341496 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:24:40.058702  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:24:40.058747  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:24:40.094399  341496 cri.go:89] found id: "ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:40.094426  341496 cri.go:89] found id: ""
	I0317 11:24:40.094436  341496 logs.go:282] 1 containers: [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610]
	I0317 11:24:40.094492  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.098090  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:24:40.098151  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:24:40.130616  341496 cri.go:89] found id: "bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:40.130634  341496 cri.go:89] found id: ""
	I0317 11:24:40.130641  341496 logs.go:282] 1 containers: [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe]
	I0317 11:24:40.130686  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.133963  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:24:40.134022  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:24:40.166714  341496 cri.go:89] found id: ""
	I0317 11:24:40.166737  341496 logs.go:282] 0 containers: []
	W0317 11:24:40.166749  341496 logs.go:284] No container was found matching "coredns"
	I0317 11:24:40.166757  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:24:40.166814  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:24:40.200402  341496 cri.go:89] found id: "17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:40.200428  341496 cri.go:89] found id: ""
	I0317 11:24:40.200438  341496 logs.go:282] 1 containers: [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9]
	I0317 11:24:40.200498  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.203808  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:24:40.203882  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:24:40.237218  341496 cri.go:89] found id: "a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:40.237243  341496 cri.go:89] found id: ""
	I0317 11:24:40.237254  341496 logs.go:282] 1 containers: [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba]
	I0317 11:24:40.237312  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.240687  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:24:40.240741  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:24:40.273296  341496 cri.go:89] found id: "e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:40.273317  341496 cri.go:89] found id: ""
	I0317 11:24:40.273326  341496 logs.go:282] 1 containers: [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0]
	I0317 11:24:40.273393  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.277173  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:24:40.277247  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:24:40.308698  341496 cri.go:89] found id: ""
	I0317 11:24:40.308720  341496 logs.go:282] 0 containers: []
	W0317 11:24:40.308728  341496 logs.go:284] No container was found matching "kindnet"
	I0317 11:24:40.308740  341496 logs.go:123] Gathering logs for kube-scheduler [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9] ...
	I0317 11:24:40.308752  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:40.348491  341496 logs.go:123] Gathering logs for kube-proxy [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba] ...
	I0317 11:24:40.348522  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:40.381699  341496 logs.go:123] Gathering logs for kube-controller-manager [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0] ...
	I0317 11:24:40.381727  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:40.428909  341496 logs.go:123] Gathering logs for container status ...
	I0317 11:24:40.428937  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:24:40.464242  341496 logs.go:123] Gathering logs for etcd [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe] ...
	I0317 11:24:40.464268  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:40.502434  341496 logs.go:123] Gathering logs for containerd ...
	I0317 11:24:40.502464  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:24:40.549009  341496 logs.go:123] Gathering logs for kubelet ...
	I0317 11:24:40.549038  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:24:40.645736  341496 logs.go:123] Gathering logs for dmesg ...
	I0317 11:24:40.645768  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:24:40.667061  341496 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:24:40.667089  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:24:40.747006  341496 logs.go:123] Gathering logs for kube-apiserver [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610] ...
	I0317 11:24:40.747040  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:43.287988  341496 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0317 11:24:43.291755  341496 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0317 11:24:43.292626  341496 api_server.go:141] control plane version: v1.32.2
	I0317 11:24:43.292649  341496 api_server.go:131] duration metric: took 3.233971345s to wait for apiserver health ...
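The healthz probe here is a plain HTTPS GET against the apiserver, considered healthy on a 200 response with body "ok". A minimal sketch, using the address from the log and skipping TLS verification only for brevity (minikube itself verifies against the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // Sketch only: a real health check should verify the cluster CA.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.76.2:8444/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok", as above
    }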
	I0317 11:24:43.292656  341496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:24:43.292676  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:24:43.292724  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:24:43.325112  341496 cri.go:89] found id: "ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:43.325137  341496 cri.go:89] found id: ""
	I0317 11:24:43.325146  341496 logs.go:282] 1 containers: [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610]
	I0317 11:24:43.325211  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.328726  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:24:43.328771  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:24:43.362728  341496 cri.go:89] found id: "bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:43.362753  341496 cri.go:89] found id: ""
	I0317 11:24:43.362763  341496 logs.go:282] 1 containers: [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe]
	I0317 11:24:43.362819  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.367669  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:24:43.367741  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:24:43.402187  341496 cri.go:89] found id: ""
	I0317 11:24:43.402216  341496 logs.go:282] 0 containers: []
	W0317 11:24:43.402227  341496 logs.go:284] No container was found matching "coredns"
	I0317 11:24:43.402234  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:24:43.402283  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:24:43.435445  341496 cri.go:89] found id: "17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:43.435466  341496 cri.go:89] found id: ""
	I0317 11:24:43.435474  341496 logs.go:282] 1 containers: [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9]
	I0317 11:24:43.435534  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.438732  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:24:43.438789  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:24:43.473195  341496 cri.go:89] found id: "a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:43.473225  341496 cri.go:89] found id: ""
	I0317 11:24:43.473236  341496 logs.go:282] 1 containers: [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba]
	I0317 11:24:43.473296  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.476550  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:24:43.476626  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:24:43.508805  341496 cri.go:89] found id: "e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:43.508825  341496 cri.go:89] found id: ""
	I0317 11:24:43.508833  341496 logs.go:282] 1 containers: [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0]
	I0317 11:24:43.508880  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.512124  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:24:43.512184  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:24:43.548896  341496 cri.go:89] found id: ""
	I0317 11:24:43.548917  341496 logs.go:282] 0 containers: []
	W0317 11:24:43.548926  341496 logs.go:284] No container was found matching "kindnet"
	I0317 11:24:43.548942  341496 logs.go:123] Gathering logs for container status ...
	I0317 11:24:43.548955  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:24:43.586167  341496 logs.go:123] Gathering logs for kube-scheduler [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9] ...
	I0317 11:24:43.586208  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:43.626501  341496 logs.go:123] Gathering logs for kube-proxy [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba] ...
	I0317 11:24:43.626537  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:43.659444  341496 logs.go:123] Gathering logs for kube-controller-manager [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0] ...
	I0317 11:24:43.659470  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:43.704387  341496 logs.go:123] Gathering logs for kubelet ...
	I0317 11:24:43.704417  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:24:43.793479  341496 logs.go:123] Gathering logs for dmesg ...
	I0317 11:24:43.793516  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:24:43.813483  341496 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:24:43.813522  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:24:43.899448  341496 logs.go:123] Gathering logs for kube-apiserver [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610] ...
	I0317 11:24:43.899483  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:43.941628  341496 logs.go:123] Gathering logs for etcd [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe] ...
	I0317 11:24:43.941659  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:43.981639  341496 logs.go:123] Gathering logs for containerd ...
	I0317 11:24:43.981675  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:24:46.529924  341496 system_pods.go:59] 8 kube-system pods found
	I0317 11:24:46.529972  341496 system_pods.go:61] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:46.529982  341496 system_pods.go:61] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:46.529994  341496 system_pods.go:61] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:46.530000  341496 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:46.530008  341496 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:46.530013  341496 system_pods.go:61] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:46.530017  341496 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:46.530022  341496 system_pods.go:61] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:46.530030  341496 system_pods.go:74] duration metric: took 3.237367001s to wait for pod list to return data ...
	I0317 11:24:46.530050  341496 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:24:46.532619  341496 default_sa.go:45] found service account: "default"
	I0317 11:24:46.532644  341496 default_sa.go:55] duration metric: took 2.587793ms for default service account to be created ...
	I0317 11:24:46.532654  341496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:24:46.534994  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:46.535018  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:46.535023  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:46.535030  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:46.535034  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:46.535038  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:46.535041  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:46.535044  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:46.535048  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:46.535073  341496 retry.go:31] will retry after 302.98689ms: missing components: kube-dns
	I0317 11:24:46.842862  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:46.842906  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:46.842915  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:46.842933  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:46.842941  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:46.842951  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:46.842964  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:46.842970  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:46.842977  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:46.842998  341496 retry.go:31] will retry after 295.784338ms: missing components: kube-dns
	I0317 11:24:47.142622  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:47.142650  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:47.142656  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:47.142664  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:47.142667  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:47.142672  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:47.142675  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:47.142678  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:47.142682  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:47.142695  341496 retry.go:31] will retry after 329.685621ms: missing components: kube-dns
	I0317 11:24:47.476124  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:47.476155  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:47.476163  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:47.476172  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:47.476176  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:47.476183  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:47.476187  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:47.476192  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:47.476197  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:47.476218  341496 retry.go:31] will retry after 460.772013ms: missing components: kube-dns
	I0317 11:24:47.940947  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:47.940977  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:47.940983  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:47.940990  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:47.940994  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:47.941000  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:47.941005  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:47.941011  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:47.941015  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:47.941035  341496 retry.go:31] will retry after 463.179256ms: missing components: kube-dns
	I0317 11:24:48.407824  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:48.407851  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:48.407858  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:48.407866  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:48.407870  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:48.407874  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:48.407877  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:48.407881  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:48.407884  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:48.407906  341496 retry.go:31] will retry after 834.652418ms: missing components: kube-dns
	I0317 11:24:49.245717  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:49.245750  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:49.245757  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:49.245771  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:49.245776  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:49.245783  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:49.245788  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:49.245793  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:49.245797  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:49.245816  341496 retry.go:31] will retry after 764.813884ms: missing components: kube-dns
	I0317 11:24:50.014701  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:50.014734  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:50.014739  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:50.014747  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:50.014751  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:50.014755  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:50.014759  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:50.014762  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:50.014765  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:50.014779  341496 retry.go:31] will retry after 1.349545391s: missing components: kube-dns
	I0317 11:24:51.368659  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:51.368694  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:51.368699  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:51.368708  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:51.368712  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:51.368716  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:51.368722  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:51.368726  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:51.368730  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:51.368743  341496 retry.go:31] will retry after 1.382092092s: missing components: kube-dns
	I0317 11:24:52.754980  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:52.755015  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:52.755025  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:52.755041  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:52.755047  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:52.755053  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:52.755058  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:52.755063  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:52.755069  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:52.755087  341496 retry.go:31] will retry after 1.716623878s: missing components: kube-dns
	I0317 11:24:54.475907  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:54.475940  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:54.475945  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:54.475954  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:54.475958  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:54.475962  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:54.475965  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:54.475968  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:54.475973  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:54.475986  341496 retry.go:31] will retry after 2.138707569s: missing components: kube-dns
	I0317 11:24:56.618436  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:56.618470  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:56.618475  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:56.618484  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:56.618488  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:56.618495  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:56.618499  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:56.618502  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:56.618505  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:56.618517  341496 retry.go:31] will retry after 3.63528576s: missing components: kube-dns
	I0317 11:25:00.258199  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:00.258235  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:00.258241  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:00.258251  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:00.258254  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:00.258260  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:00.258263  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:00.258266  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:00.258270  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:00.258282  341496 retry.go:31] will retry after 4.131879021s: missing components: kube-dns
	I0317 11:25:04.395415  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:04.395449  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:04.395457  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:04.395468  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:04.395477  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:04.395484  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:04.395490  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:04.395494  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:04.395501  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:04.395524  341496 retry.go:31] will retry after 4.696723656s: missing components: kube-dns
	I0317 11:25:09.098999  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:09.099034  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:09.099039  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:09.099048  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:09.099051  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:09.099056  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:09.099060  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:09.099067  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:09.099084  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:09.099101  341496 retry.go:31] will retry after 6.54261594s: missing components: kube-dns
	I0317 11:25:15.645674  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:15.645707  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:15.645713  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:15.645731  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:15.645737  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:15.645741  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:15.645744  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:15.645748  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:15.645751  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:15.645765  341496 retry.go:31] will retry after 8.682977828s: missing components: kube-dns
	I0317 11:25:24.334764  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:24.334803  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:24.334812  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:24.334823  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:24.334829  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:24.334836  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:24.334841  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:24.334846  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:24.334851  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:24.334872  341496 retry.go:31] will retry after 8.369739081s: missing components: kube-dns
	I0317 11:25:32.710525  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:32.710557  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:32.710565  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:32.710573  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:32.710577  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:32.710581  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:32.710585  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:32.710588  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:32.710591  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:32.710607  341496 retry.go:31] will retry after 9.14722352s: missing components: kube-dns
	I0317 11:25:41.862777  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:41.862817  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:41.862822  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:41.862829  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:41.862833  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:41.862837  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:41.862840  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:41.862843  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:41.862846  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:41.862859  341496 retry.go:31] will retry after 13.233633218s: missing components: kube-dns
	I0317 11:25:55.099860  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:55.099896  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:55.099902  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:55.099910  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:55.099914  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:55.099919  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:55.099923  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:55.099926  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:55.099930  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:55.099944  341496 retry.go:31] will retry after 14.188953941s: missing components: kube-dns
	I0317 11:26:09.294232  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:26:09.294272  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:26:09.294277  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:26:09.294286  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:26:09.294292  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:26:09.294298  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:26:09.294303  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:26:09.294308  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:26:09.294317  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:26:09.294335  341496 retry.go:31] will retry after 24.368966059s: missing components: kube-dns
	I0317 11:26:33.667957  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:26:33.667993  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:26:33.668007  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:26:33.668018  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:26:33.668026  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:26:33.668032  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:26:33.668038  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:26:33.668047  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:26:33.668055  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:26:33.668080  341496 retry.go:31] will retry after 32.292524587s: missing components: kube-dns
	I0317 11:27:05.964207  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:27:05.964241  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:27:05.964247  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:27:05.964257  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:27:05.964263  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:27:05.964267  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:27:05.964270  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:27:05.964273  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:27:05.964277  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:27:05.964292  341496 retry.go:31] will retry after 41.950050158s: missing components: kube-dns
	I0317 11:27:47.919561  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:27:47.919605  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:27:47.919613  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:27:47.919626  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:27:47.919632  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:27:47.919638  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:27:47.919644  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:27:47.919649  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:27:47.919656  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:27:47.919674  341496 retry.go:31] will retry after 51.422565643s: missing components: kube-dns
	I0317 11:28:39.349158  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:28:39.349196  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:28:39.349208  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:28:39.349219  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:28:39.349226  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:28:39.349232  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:28:39.349236  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:28:39.349241  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:28:39.349247  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:28:39.349264  341496 retry.go:31] will retry after 1m0.161179598s: missing components: kube-dns
	I0317 11:29:39.514594  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:29:39.514631  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:29:39.514640  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:29:39.514648  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:29:39.514652  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:29:39.514655  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:29:39.514659  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:29:39.514663  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:29:39.514666  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:29:39.516738  341496 out.go:201] 
	W0317 11:29:39.517897  341496 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:29:39.517911  341496 out.go:270] * 
	W0317 11:29:39.518696  341496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:29:39.520058  341496 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-627203 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2": exit status 80
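Every retry above shows the same picture: all control-plane pods Running while coredns-668d6bf9bc-tm7kk and the kindnet-q6mbv CNI pod stay ContainersNotReady, until the 6m0s apps_running budget expires. A minimal sketch of how to inspect the stuck pods by hand, assuming the kubeconfig context is named after the profile (minikube's default):

    kubectl --context default-k8s-diff-port-627203 -n kube-system get pods -l k8s-app=kube-dns -o wide
    kubectl --context default-k8s-diff-port-627203 -n kube-system describe pod -l k8s-app=kube-dns
    kubectl --context default-k8s-diff-port-627203 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m

The describe events (image pull, sandbox creation, CNI setup) would show why the containers never start, which the retry loop's pod summary cannot surface on its own.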
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-627203
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-627203:

-- stdout --
	[
	    {
	        "Id": "f8588f5680b28fe4e4aaa94bd7222024aee6392f5ad3f15c03a39796cd0eb4c5",
	        "Created": "2025-03-17T11:20:15.416558119Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342063,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-17T11:20:15.45219865Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/f8588f5680b28fe4e4aaa94bd7222024aee6392f5ad3f15c03a39796cd0eb4c5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f8588f5680b28fe4e4aaa94bd7222024aee6392f5ad3f15c03a39796cd0eb4c5/hostname",
	        "HostsPath": "/var/lib/docker/containers/f8588f5680b28fe4e4aaa94bd7222024aee6392f5ad3f15c03a39796cd0eb4c5/hosts",
	        "LogPath": "/var/lib/docker/containers/f8588f5680b28fe4e4aaa94bd7222024aee6392f5ad3f15c03a39796cd0eb4c5/f8588f5680b28fe4e4aaa94bd7222024aee6392f5ad3f15c03a39796cd0eb4c5-json.log",
	        "Name": "/default-k8s-diff-port-627203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-627203:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-627203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f8588f5680b28fe4e4aaa94bd7222024aee6392f5ad3f15c03a39796cd0eb4c5",
	                "LowerDir": "/var/lib/docker/overlay2/fc1ba58d1e7c869aec36c5589d579b3f1e736ba7615a8be9eb32bd5b2f4fa31f-init/diff:/var/lib/docker/overlay2/c513cb32e4b42c4b2e1258d7197e5cd39dcbb3306943490e9747416948e6aaf6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc1ba58d1e7c869aec36c5589d579b3f1e736ba7615a8be9eb32bd5b2f4fa31f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc1ba58d1e7c869aec36c5589d579b3f1e736ba7615a8be9eb32bd5b2f4fa31f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc1ba58d1e7c869aec36c5589d579b3f1e736ba7615a8be9eb32bd5b2f4fa31f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-627203",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-627203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-627203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-627203",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-627203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8a1ada147c6eb5f11b7952047688cee1fba6a5521f7cc51ca3bac4fa26b3bdd2",
	            "SandboxKey": "/var/run/docker/netns/8a1ada147c6e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-627203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:09:38:28:d8:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d46cba2c65e0e0d2001566476a1dac535f2bed2d70746ef2140734abc97bd744",
	                    "EndpointID": "8128dc62f07939ddd326131395c443e1ab974a5d4ba18b1a524a17de5d741a62",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-627203",
	                        "f8588f5680b2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
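Most of the inspect dump above is noise for this failure; the fields that matter (container state, node IP, and the host port mapped to the container's 8444/tcp apiserver port) can be pulled directly with Go-template queries. A minimal sketch against the container named above:

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' default-k8s-diff-port-627203
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' default-k8s-diff-port-627203
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-627203

Per the output above these resolve to running/342063, 192.168.76.2, and 33106 respectively.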
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-627203 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-627203 logs -n 25: (1.073764825s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-236437 sudo iptables                       | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo cat                            | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo cat                            | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo cat                            | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC |                     |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo docker                         | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo cat                            | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo cat                            | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo cat                            | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo cat                            | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC |                     |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo                                | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo find                           | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p calico-236437 sudo crio                           | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC | 17 Mar 25 11:29 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p calico-236437                                     | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:29 UTC |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 11:20:09
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 11:20:09.951775  341496 out.go:345] Setting OutFile to fd 1 ...
	I0317 11:20:09.951911  341496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:20:09.951918  341496 out.go:358] Setting ErrFile to fd 2...
	I0317 11:20:09.951924  341496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:20:09.952147  341496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 11:20:09.952741  341496 out.go:352] Setting JSON to false
	I0317 11:20:09.954025  341496 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3703,"bootTime":1742206707,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 11:20:09.954091  341496 start.go:139] virtualization: kvm guest
	I0317 11:20:09.956439  341496 out.go:177] * [default-k8s-diff-port-627203] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 11:20:09.957897  341496 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:20:09.957990  341496 notify.go:220] Checking for updates...
	I0317 11:20:09.960721  341496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:20:09.962333  341496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:20:09.963810  341496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 11:20:09.965290  341496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 11:20:09.966759  341496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:20:09.968637  341496 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:09.968800  341496 config.go:182] Loaded profile config "no-preload-189670": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:09.968922  341496 config.go:182] Loaded profile config "old-k8s-version-702762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0317 11:20:09.969134  341496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:20:09.994726  341496 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 11:20:09.994957  341496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:20:10.047464  341496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:20:10.037717036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:20:10.047559  341496 docker.go:318] overlay module found
	I0317 11:20:10.049461  341496 out.go:177] * Using the docker driver based on user configuration
	I0317 11:20:10.050764  341496 start.go:297] selected driver: docker
	I0317 11:20:10.050780  341496 start.go:901] validating driver "docker" against <nil>
	I0317 11:20:10.050795  341496 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:20:10.051718  341496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 11:20:10.105955  341496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:20:10.096342154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 11:20:10.106128  341496 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:20:10.106353  341496 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:20:10.108473  341496 out.go:177] * Using Docker driver with root privileges
	I0317 11:20:10.109937  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:10.110100  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:10.110117  341496 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 11:20:10.110220  341496 start.go:340] cluster config:
	{Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:20:10.111829  341496 out.go:177] * Starting "default-k8s-diff-port-627203" primary control-plane node in "default-k8s-diff-port-627203" cluster
	I0317 11:20:10.113031  341496 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 11:20:10.114478  341496 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 11:20:10.115992  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:10.116043  341496 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 11:20:10.116053  341496 cache.go:56] Caching tarball of preloaded images
	I0317 11:20:10.116120  341496 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 11:20:10.116149  341496 preload.go:172] Found /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:20:10.116162  341496 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 11:20:10.116325  341496 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json ...
	I0317 11:20:10.116351  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json: {Name:mk848192ef1b40ae1077b4c3a36047479a0034b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:10.138687  341496 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 11:20:10.138707  341496 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 11:20:10.138729  341496 cache.go:230] Successfully downloaded all kic artifacts
	I0317 11:20:10.138768  341496 start.go:360] acquireMachinesLock for default-k8s-diff-port-627203: {Name:mkcbff1d84866f612a979fbe06c726407300b170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:20:10.138896  341496 start.go:364] duration metric: took 104.168µs to acquireMachinesLock for "default-k8s-diff-port-627203"
	I0317 11:20:10.138925  341496 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:20:10.139000  341496 start.go:125] createHost starting for "" (driver="docker")
	I0317 11:20:10.141230  341496 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0317 11:20:10.141482  341496 start.go:159] libmachine.API.Create for "default-k8s-diff-port-627203" (driver="docker")
	I0317 11:20:10.141513  341496 client.go:168] LocalClient.Create starting
	I0317 11:20:10.141581  341496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem
	I0317 11:20:10.141611  341496 main.go:141] libmachine: Decoding PEM data...
	I0317 11:20:10.141625  341496 main.go:141] libmachine: Parsing certificate...
	I0317 11:20:10.141678  341496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem
	I0317 11:20:10.141696  341496 main.go:141] libmachine: Decoding PEM data...
	I0317 11:20:10.141706  341496 main.go:141] libmachine: Parsing certificate...
	I0317 11:20:10.142029  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 11:20:10.160384  341496 cli_runner.go:211] docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 11:20:10.160474  341496 network_create.go:284] running [docker network inspect default-k8s-diff-port-627203] to gather additional debugging logs...
	I0317 11:20:10.160501  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203
	W0317 11:20:10.178195  341496 cli_runner.go:211] docker network inspect default-k8s-diff-port-627203 returned with exit code 1
	I0317 11:20:10.178227  341496 network_create.go:287] error running [docker network inspect default-k8s-diff-port-627203]: docker network inspect default-k8s-diff-port-627203: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-627203 not found
	I0317 11:20:10.178241  341496 network_create.go:289] output of [docker network inspect default-k8s-diff-port-627203]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-627203 not found
	
	** /stderr **
	I0317 11:20:10.178338  341496 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:20:10.197679  341496 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
	I0317 11:20:10.198624  341496 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-00bf62ef0133 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:c5:34:86:d6:21} reservation:<nil>}
	I0317 11:20:10.199639  341496 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-81e0001ceae7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:6a:cf:1c:79:e6} reservation:<nil>}
	I0317 11:20:10.200718  341496 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d24500}
	I0317 11:20:10.200739  341496 network_create.go:124] attempt to create docker network default-k8s-diff-port-627203 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0317 11:20:10.200784  341496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 default-k8s-diff-port-627203
	I0317 11:20:10.255439  341496 network_create.go:108] docker network default-k8s-diff-port-627203 192.168.76.0/24 created
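
The three "skipping subnet" probes above show how a free /24 is chosen before `docker network create`: each candidate block is rejected if its gateway address is already assigned to a host bridge. Below is a minimal Go sketch of that selection idea; the candidate list and the gateway-on-interface check are illustrative assumptions, not minikube's exact implementation.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 whose .1 gateway is not
// already assigned to a local interface (e.g. an existing docker bridge),
// mirroring the skip/use decisions in the log above.
func firstFreeSubnet(candidates []string) (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	taken := map[string]bool{}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok {
			taken[ipnet.IP.String()] = true
		}
	}
	for _, cidr := range candidates {
		ip, _, err := net.ParseCIDR(cidr)
		if err != nil {
			return "", err
		}
		gw := ip.To4()
		gw[3] = 1 // the gateway is .1 by convention
		if !taken[gw.String()] {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %d candidates", len(candidates))
}

func main() {
	// The 49 -> 58 -> 67 -> 76 progression matches the probes above.
	subnet, err := firstFreeSubnet([]string{
		"192.168.49.0/24", "192.168.58.0/24",
		"192.168.67.0/24", "192.168.76.0/24",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet)
}
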
	I0317 11:20:10.255568  341496 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-627203" container
	I0317 11:20:10.255629  341496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 11:20:10.274724  341496 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-627203 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --label created_by.minikube.sigs.k8s.io=true
	I0317 11:20:10.294680  341496 oci.go:103] Successfully created a docker volume default-k8s-diff-port-627203
	I0317 11:20:10.294772  341496 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-627203-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --entrypoint /usr/bin/test -v default-k8s-diff-port-627203:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 11:20:10.747828  341496 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-627203
	I0317 11:20:10.747877  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:10.747900  341496 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 11:20:10.747969  341496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-627203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 11:20:14.847118  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:14.847156  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:14.847163  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:14.847172  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:14.847176  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:14.847181  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:14.847184  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:14.847187  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:14.847194  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:14.847208  326404 retry.go:31] will retry after 10.791921859s: missing components: kube-dns
	I0317 11:20:15.344266  341496 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-627203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.596232772s)
	I0317 11:20:15.344302  341496 kic.go:203] duration metric: took 4.596396796s to extract preloaded images to volume ...
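
The preload step above avoids copying images into a running node: the lz4 tarball is bind-mounted read-only into a short-lived container and untarred straight into the named volume that the node container will later mount at /var. A sketch of the same invocation driven from Go follows; the paths and image reference are taken verbatim from the log, and the error handling is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload untars an lz4 preload into a named docker volume by running
// tar inside a throwaway container, the same invocation as the log above.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4",
		"default-k8s-diff-port-627203",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523")
	if err != nil {
		fmt.Println(err)
	}
}
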
	W0317 11:20:15.344459  341496 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 11:20:15.344607  341496 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 11:20:15.397506  341496 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-627203 --name default-k8s-diff-port-627203 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-627203 --network default-k8s-diff-port-627203 --ip 192.168.76.2 --volume default-k8s-diff-port-627203:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 11:20:15.665923  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Running}}
	I0317 11:20:15.686899  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:15.706866  341496 cli_runner.go:164] Run: docker exec default-k8s-diff-port-627203 stat /var/lib/dpkg/alternatives/iptables
	I0317 11:20:15.749402  341496 oci.go:144] the created container "default-k8s-diff-port-627203" has a running status.
	I0317 11:20:15.749447  341496 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa...
	I0317 11:20:15.892302  341496 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 11:20:15.918468  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:15.941520  341496 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 11:20:15.941545  341496 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-627203 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 11:20:15.989310  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:16.010066  341496 machine.go:93] provisionDockerMachine start ...
	I0317 11:20:16.010194  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:16.033285  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:16.033637  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:16.033665  341496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:20:16.034656  341496 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46524->127.0.0.1:33103: read: connection reset by peer
	I0317 11:20:19.170824  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-627203
	
	I0317 11:20:19.170859  341496 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-627203"
	I0317 11:20:19.170929  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.189150  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:19.189434  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:19.189452  341496 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-627203 && echo "default-k8s-diff-port-627203" | sudo tee /etc/hostname
	I0317 11:20:19.334316  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-627203
	
	I0317 11:20:19.334392  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.351482  341496 main.go:141] libmachine: Using SSH client type: native
	I0317 11:20:19.351684  341496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0317 11:20:19.351701  341496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-627203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-627203/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-627203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:20:19.483211  341496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:20:19.483289  341496 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20535-4918/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-4918/.minikube}
	I0317 11:20:19.483331  341496 ubuntu.go:177] setting up certificates
	I0317 11:20:19.483341  341496 provision.go:84] configureAuth start
	I0317 11:20:19.483396  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:19.500645  341496 provision.go:143] copyHostCerts
	I0317 11:20:19.500703  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem, removing ...
	I0317 11:20:19.500713  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem
	I0317 11:20:19.500773  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem (1082 bytes)
	I0317 11:20:19.500859  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem, removing ...
	I0317 11:20:19.500868  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem
	I0317 11:20:19.500892  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem (1123 bytes)
	I0317 11:20:19.500946  341496 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem, removing ...
	I0317 11:20:19.500954  341496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem
	I0317 11:20:19.500979  341496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem (1679 bytes)
	I0317 11:20:19.501029  341496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-627203 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-627203 localhost minikube]
	I0317 11:20:19.577076  341496 provision.go:177] copyRemoteCerts
	I0317 11:20:19.577143  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:20:19.577187  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.594134  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:19.688036  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:20:19.710326  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0317 11:20:19.732614  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 11:20:19.753945  341496 provision.go:87] duration metric: took 270.590449ms to configureAuth
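
configureAuth above generates a server certificate whose SANs cover the container IP, loopback and the machine hostnames, signed by the minikube CA. A self-contained sketch of issuing such a cert with Go's crypto/x509 follows; the self-signed CA in main and the output file name are illustrative stand-ins, while the SAN list and the 26280h lifetime mirror the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServerCert signs a server certificate with the given CA; the SANs
// and lifetime mirror the provisioning log above.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-627203"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-627203", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	return os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}

func main() {
	// Illustrative self-signed CA standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	if err := issueServerCert(caCert, caKey); err != nil {
		panic(err)
	}
}
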
	I0317 11:20:19.753968  341496 ubuntu.go:193] setting minikube options for container-runtime
	I0317 11:20:19.754118  341496 config.go:182] Loaded profile config "default-k8s-diff-port-627203": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:19.754128  341496 machine.go:96] duration metric: took 3.744035437s to provisionDockerMachine
	I0317 11:20:19.754134  341496 client.go:171] duration metric: took 9.612615756s to LocalClient.Create
	I0317 11:20:19.754154  341496 start.go:167] duration metric: took 9.612671271s to libmachine.API.Create "default-k8s-diff-port-627203"
	I0317 11:20:19.754161  341496 start.go:293] postStartSetup for "default-k8s-diff-port-627203" (driver="docker")
	I0317 11:20:19.754175  341496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:20:19.754215  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:20:19.754250  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.771203  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:19.872391  341496 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:20:19.875550  341496 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 11:20:19.875582  341496 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 11:20:19.875595  341496 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 11:20:19.875607  341496 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 11:20:19.875635  341496 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/addons for local assets ...
	I0317 11:20:19.875698  341496 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/files for local assets ...
	I0317 11:20:19.875804  341496 filesync.go:149] local asset: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem -> 116902.pem in /etc/ssl/certs
	I0317 11:20:19.875917  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:20:19.883445  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:20:19.905732  341496 start.go:296] duration metric: took 151.558516ms for postStartSetup
	I0317 11:20:19.906060  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:19.925755  341496 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/config.json ...
	I0317 11:20:19.926020  341496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 11:20:19.926086  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:19.944770  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:15.751647  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:15.751680  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:15.751688  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:15.751699  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:15.751704  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:15.751711  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:15.751716  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:15.751722  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:15.751728  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:15.751750  317731 retry.go:31] will retry after 15.481083164s: missing components: kube-dns
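
All of the FirstStart failures in this report stall on the same condition: the retry loops above keep listing kube-system pods until a coredns pod reports Ready, and time out with "missing components: kube-dns". A sketch of the equivalent poll with client-go follows; the 10s interval and 6m budget are illustrative assumptions, not minikube's exact backoff.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForKubeDNS polls kube-system for a Ready coredns pod, the same
// condition the retries above are waiting on.
func waitForKubeDNS(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil {
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("timed out waiting for kube-dns")
}

func main() {
	if err := waitForKubeDNS(clientcmd.RecommendedHomeFile, 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
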
	I0317 11:20:20.036185  341496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 11:20:20.040344  341496 start.go:128] duration metric: took 9.901332366s to createHost
	I0317 11:20:20.040365  341496 start.go:83] releasing machines lock for "default-k8s-diff-port-627203", held for 9.901455126s
	I0317 11:20:20.040424  341496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-627203
	I0317 11:20:20.057945  341496 ssh_runner.go:195] Run: cat /version.json
	I0317 11:20:20.057987  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:20.058044  341496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 11:20:20.058110  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:20.077893  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:20.078299  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:20.248043  341496 ssh_runner.go:195] Run: systemctl --version
	I0317 11:20:20.252422  341496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:20:20.256698  341496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 11:20:20.280151  341496 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 11:20:20.280205  341496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:20:20.303739  341496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0317 11:20:20.303757  341496 start.go:495] detecting cgroup driver to use...
	I0317 11:20:20.303795  341496 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 11:20:20.303871  341496 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:20:20.314490  341496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:20:20.323921  341496 docker.go:217] disabling cri-docker service (if available) ...
	I0317 11:20:20.323964  341496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 11:20:20.336961  341496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 11:20:20.348981  341496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 11:20:20.427755  341496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 11:20:20.507541  341496 docker.go:233] disabling docker service ...
	I0317 11:20:20.507615  341496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 11:20:20.525433  341496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 11:20:20.536350  341496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 11:20:20.601585  341496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 11:20:20.666739  341496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 11:20:20.677294  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:20:20.692169  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:20:20.700729  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:20:20.709826  341496 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:20:20.709888  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:20:20.718738  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:20:20.727842  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:20:20.736960  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:20:20.745738  341496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:20:20.753974  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:20:20.762628  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:20:20.770887  341496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 11:20:20.779873  341496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:20:20.787306  341496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:20:20.794585  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:20.857244  341496 ssh_runner.go:195] Run: sudo systemctl restart containerd
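
The sed pipeline above rewrites /etc/containerd/config.toml so that SystemdCgroup matches the "cgroupfs" driver detected on the host, then restarts containerd. The same in-place edit expressed as a Go sketch (path and file mode are illustrative; the value is false here because the host driver was cgroupfs):

package main

import (
	"os"
	"regexp"
)

// setCgroupDriver flips the SystemdCgroup key in containerd's config.toml,
// preserving leading indentation, like the sed command in the log above.
func setCgroupDriver(path string, systemd bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	val := "false"
	if systemd {
		val = "true"
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = "+val))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setCgroupDriver("/etc/containerd/config.toml", false); err != nil {
		panic(err)
	}
}
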
	I0317 11:20:20.962615  341496 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 11:20:20.962696  341496 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 11:20:20.966342  341496 start.go:563] Will wait 60s for crictl version
	I0317 11:20:20.966394  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:20:20.969458  341496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:20:21.000301  341496 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 11:20:21.000364  341496 ssh_runner.go:195] Run: containerd --version
	I0317 11:20:21.021585  341496 ssh_runner.go:195] Run: containerd --version
	I0317 11:20:21.045298  341496 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 11:20:21.046823  341496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-627203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 11:20:21.063998  341496 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0317 11:20:21.067681  341496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:20:21.078036  341496 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:20:21.078155  341496 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 11:20:21.078215  341496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:20:21.110394  341496 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:20:21.110416  341496 containerd.go:534] Images already preloaded, skipping extraction
	I0317 11:20:21.110471  341496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 11:20:21.147039  341496 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 11:20:21.147059  341496 cache_images.go:84] Images are preloaded, skipping loading
	I0317 11:20:21.147072  341496 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.32.2 containerd true true} ...
	I0317 11:20:21.147182  341496 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-627203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 11:20:21.147245  341496 ssh_runner.go:195] Run: sudo crictl info
	I0317 11:20:21.180368  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:21.180402  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:21.180417  341496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:20:21.180451  341496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-627203 NodeName:default-k8s-diff-port-627203 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 11:20:21.180598  341496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-627203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 11:20:21.180676  341496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:20:21.189167  341496 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 11:20:21.189222  341496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 11:20:21.197091  341496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0317 11:20:21.212836  341496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:20:21.228613  341496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2318 bytes)
	I0317 11:20:21.244235  341496 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0317 11:20:21.247449  341496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
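
The two hosts-file updates above follow a filter-and-rewrite pattern: drop any existing line for the name, append the fresh mapping, and copy the result back over /etc/hosts. A Go sketch of that pattern follows; note it rewrites the file in place (like the `sudo cp` above) rather than renaming a temp file over it, since /etc/hosts is bind-mounted inside the container and cannot be replaced atomically.

package main

import (
	"os"
	"strings"
)

// upsertHost ensures exactly one "ip<TAB>name" line in an /etc/hosts-style
// file: remove any existing mapping for name, append the new one, rewrite.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// Rewrite in place; a rename would fail on the bind-mounted /etc/hosts.
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
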
	I0317 11:20:21.257029  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:21.331412  341496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:20:21.344658  341496 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203 for IP: 192.168.76.2
	I0317 11:20:21.344685  341496 certs.go:194] generating shared ca certs ...
	I0317 11:20:21.344706  341496 certs.go:226] acquiring lock for ca certs: {Name:mkf58624c63680e02907d28348d45986283847c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.344852  341496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key
	I0317 11:20:21.344888  341496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key
	I0317 11:20:21.344900  341496 certs.go:256] generating profile certs ...
	I0317 11:20:21.344967  341496 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key
	I0317 11:20:21.344994  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt with IP's: []
	I0317 11:20:21.433063  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt ...
	I0317 11:20:21.433090  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.crt: {Name:mk081d27f47a46e83ef42cd529ab90efa4a42374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.433242  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key ...
	I0317 11:20:21.433256  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/client.key: {Name:mk3ff3f97f5b6d17c55106167353f358e3be7b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.433330  341496 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2
	I0317 11:20:21.433345  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0317 11:20:21.695664  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 ...
	I0317 11:20:21.695695  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2: {Name:mk7442ef755923abf17c70bd38ce4a38e38e6b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.695884  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2 ...
	I0317 11:20:21.695904  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2: {Name:mke8376d0935665b80188d48fe43b8e5b8ff6f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.695977  341496 certs.go:381] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt.0ed8e3f2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt
	I0317 11:20:21.696069  341496 certs.go:385] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key.0ed8e3f2 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key
	I0317 11:20:21.696166  341496 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key
	I0317 11:20:21.696189  341496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt with IP's: []
	I0317 11:20:21.791034  341496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt ...
	I0317 11:20:21.791067  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt: {Name:mk96f99fc08821936606db2cdde9f87f27d42fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.791243  341496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key ...
	I0317 11:20:21.791284  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key: {Name:mk0e9ec0c366cd0af025f90a833ba1e60d673556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:21.791492  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem (1338 bytes)
	W0317 11:20:21.791525  341496 certs.go:480] ignoring /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690_empty.pem, impossibly tiny 0 bytes
	I0317 11:20:21.791536  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 11:20:21.791559  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem (1082 bytes)
	I0317 11:20:21.791585  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem (1123 bytes)
	I0317 11:20:21.791609  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem (1679 bytes)
	I0317 11:20:21.791644  341496 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem (1708 bytes)
	I0317 11:20:21.792251  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:20:21.814842  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:20:21.836814  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:20:21.860128  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 11:20:21.881562  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0317 11:20:21.903421  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 11:20:21.928625  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:20:21.951436  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/default-k8s-diff-port-627203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 11:20:21.974719  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem --> /usr/share/ca-certificates/11690.pem (1338 bytes)
	I0317 11:20:21.998103  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /usr/share/ca-certificates/116902.pem (1708 bytes)
	I0317 11:20:22.019954  341496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:20:22.042505  341496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:20:22.058914  341496 ssh_runner.go:195] Run: openssl version
	I0317 11:20:22.064354  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116902.pem && ln -fs /usr/share/ca-certificates/116902.pem /etc/ssl/certs/116902.pem"
	I0317 11:20:22.073425  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.076909  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:32 /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.076964  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116902.pem
	I0317 11:20:22.084480  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116902.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:20:22.094200  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:20:22.103020  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.106304  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.106414  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:20:22.112757  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:20:22.121663  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11690.pem && ln -fs /usr/share/ca-certificates/11690.pem /etc/ssl/certs/11690.pem"
	I0317 11:20:22.130150  341496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.133632  341496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:32 /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.133685  341496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11690.pem
	I0317 11:20:22.140348  341496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11690.pem /etc/ssl/certs/51391683.0"
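
The `openssl x509 -hash` / `ln -fs` pairs above create the <subject-hash>.0 symlinks that OpenSSL's certificate lookup expects in /etc/ssl/certs. A sketch of creating one such link in Go, shelling out to openssl for the hash exactly as the log does (the PEM path is taken from the log; the Lstat guard is an illustrative detail):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert creates the /etc/ssl/certs/<subject-hash>.0 symlink for a CA
// certificate, computing the hash with the openssl CLI as in the log above.
func linkCACert(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
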
	I0317 11:20:22.148875  341496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:20:22.151896  341496 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:20:22.151951  341496 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-627203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-627203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:20:22.152020  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 11:20:22.152054  341496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 11:20:22.184980  341496 cri.go:89] found id: ""
	I0317 11:20:22.185043  341496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:20:22.193505  341496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:20:22.201849  341496 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 11:20:22.201930  341496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:20:22.210091  341496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:20:22.210113  341496 kubeadm.go:157] found existing configuration files:
	
	I0317 11:20:22.210163  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0317 11:20:22.218192  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:20:22.218255  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:20:22.226657  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0317 11:20:22.239638  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:20:22.239694  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:20:22.247616  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0317 11:20:22.256388  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:20:22.256448  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:20:22.264706  341496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0317 11:20:22.272518  341496 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:20:22.272585  341496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 11:20:22.281056  341496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 11:20:22.333597  341496 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 11:20:22.333966  341496 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 11:20:22.389918  341496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 11:20:25.643642  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:25.643677  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:25.643687  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:25.643701  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:25.643706  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:25.643713  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:25.643718  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:25.643723  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:25.643727  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:25.643744  326404 retry.go:31] will retry after 15.233092286s: missing components: kube-dns
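The interleaved "will retry after …: missing components: kube-dns" lines come from minikube re-listing kube-system pods and sleeping a growing, jittered interval between attempts until kube-dns reports Ready or the overall timeout expires. A sketch of that polling pattern (an illustrative helper, not minikube's actual retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // pollUntil retries check with a growing, jittered delay, mirroring the
    // "will retry after Xs: missing components: kube-dns" lines in this log.
    func pollUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        wait := 10 * time.Second
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
            fmt.Printf("will retry after %s: %v\n", jittered, err)
            time.Sleep(jittered)
            wait = wait * 3 / 2 // grow the base delay each round
        }
    }

    func main() {
        _ = pollUntil(time.Minute, func() error {
            return errors.New("missing components: kube-dns")
        })
    }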
	I0317 11:20:31.555534  341496 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 11:20:31.555624  341496 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:20:31.555753  341496 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 11:20:31.555806  341496 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 11:20:31.555879  341496 kubeadm.go:310] OS: Linux
	I0317 11:20:31.555963  341496 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 11:20:31.556040  341496 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 11:20:31.556116  341496 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 11:20:31.556186  341496 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 11:20:31.556263  341496 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 11:20:31.556356  341496 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 11:20:31.556406  341496 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 11:20:31.556449  341496 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 11:20:31.556490  341496 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 11:20:31.556550  341496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:20:31.556678  341496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:20:31.556827  341496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:20:31.556924  341496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:20:31.558772  341496 out.go:235]   - Generating certificates and keys ...
	I0317 11:20:31.558886  341496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:20:31.558955  341496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:20:31.559017  341496 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:20:31.559068  341496 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:20:31.559146  341496 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:20:31.559215  341496 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:20:31.559342  341496 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:20:31.559507  341496 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-627203 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 11:20:31.559566  341496 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:20:31.559687  341496 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-627203 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0317 11:20:31.559743  341496 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:20:31.559836  341496 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:20:31.559913  341496 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:20:31.560004  341496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:20:31.560089  341496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:20:31.560182  341496 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 11:20:31.560271  341496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:20:31.560363  341496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:20:31.560437  341496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:20:31.560547  341496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:20:31.560619  341496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:20:31.561976  341496 out.go:235]   - Booting up control plane ...
	I0317 11:20:31.562075  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:20:31.562146  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:20:31.562203  341496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:20:31.562291  341496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:20:31.562370  341496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:20:31.562404  341496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:20:31.562526  341496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 11:20:31.562631  341496 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 11:20:31.562686  341496 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.585498ms
	I0317 11:20:31.562756  341496 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 11:20:31.562810  341496 kubeadm.go:310] [api-check] The API server is healthy after 5.001640951s
	I0317 11:20:31.562926  341496 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:20:31.563043  341496 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:20:31.563096  341496 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:20:31.563308  341496 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-627203 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 11:20:31.563370  341496 kubeadm.go:310] [bootstrap-token] Using token: cynw4v.vidupn9uwbpkry9q
	I0317 11:20:31.565344  341496 out.go:235]   - Configuring RBAC rules ...
	I0317 11:20:31.565438  341496 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:20:31.565516  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:20:31.565649  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:20:31.565854  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:20:31.565999  341496 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:20:31.566087  341496 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:20:31.566197  341496 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:20:31.566250  341496 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:20:31.566293  341496 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:20:31.566298  341496 kubeadm.go:310] 
	I0317 11:20:31.566370  341496 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:20:31.566379  341496 kubeadm.go:310] 
	I0317 11:20:31.566477  341496 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:20:31.566484  341496 kubeadm.go:310] 
	I0317 11:20:31.566505  341496 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:20:31.566555  341496 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:20:31.566599  341496 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:20:31.566605  341496 kubeadm.go:310] 
	I0317 11:20:31.566649  341496 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:20:31.566655  341496 kubeadm.go:310] 
	I0317 11:20:31.566724  341496 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:20:31.566735  341496 kubeadm.go:310] 
	I0317 11:20:31.566814  341496 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:20:31.566915  341496 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:20:31.567023  341496 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:20:31.567040  341496 kubeadm.go:310] 
	I0317 11:20:31.567157  341496 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:20:31.567285  341496 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:20:31.567299  341496 kubeadm.go:310] 
	I0317 11:20:31.567400  341496 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cynw4v.vidupn9uwbpkry9q \
	I0317 11:20:31.567505  341496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 \
	I0317 11:20:31.567540  341496 kubeadm.go:310] 	--control-plane 
	I0317 11:20:31.567550  341496 kubeadm.go:310] 
	I0317 11:20:31.567675  341496 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:20:31.567685  341496 kubeadm.go:310] 
	I0317 11:20:31.567820  341496 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cynw4v.vidupn9uwbpkry9q \
	I0317 11:20:31.567990  341496 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 
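The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. It can be recomputed from the certificateDir logged earlier; the ca.crt path below assumes minikube's /var/lib/minikube/certs layout:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Read the cluster CA that kubeadm generated under certificateDir.
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }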
	I0317 11:20:31.568005  341496 cni.go:84] Creating CNI manager for ""
	I0317 11:20:31.568014  341496 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 11:20:31.570308  341496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 11:20:31.571654  341496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 11:20:31.575330  341496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 11:20:31.575346  341496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 11:20:31.592203  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 11:20:31.796107  341496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:20:31.796185  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:31.796227  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-627203 minikube.k8s.io/updated_at=2025_03_17T11_20_31_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=default-k8s-diff-port-627203 minikube.k8s.io/primary=true
	I0317 11:20:31.913761  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:31.913762  341496 ops.go:34] apiserver oom_adj: -16
	I0317 11:20:32.414495  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:32.914861  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:33.414784  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:33.914144  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:34.414705  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:34.913915  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:35.414122  341496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:20:35.487512  341496 kubeadm.go:1113] duration metric: took 3.691382531s to wait for elevateKubeSystemPrivileges
	I0317 11:20:35.487556  341496 kubeadm.go:394] duration metric: took 13.335608972s to StartCluster
	I0317 11:20:35.487576  341496 settings.go:142] acquiring lock: {Name:mk2a57d556efff40ccd4336229d7a78216b861f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:35.487640  341496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 11:20:35.489566  341496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/kubeconfig: {Name:mk686b9f6159ab958672b945ae0aa5a9c96e9ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:20:35.489774  341496 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 11:20:35.489881  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:20:35.489943  341496 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:20:35.490029  341496 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-627203"
	I0317 11:20:35.490056  341496 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-627203"
	I0317 11:20:35.490076  341496 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-627203"
	I0317 11:20:35.490078  341496 config.go:182] Loaded profile config "default-k8s-diff-port-627203": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 11:20:35.490098  341496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-627203"
	I0317 11:20:35.490113  341496 host.go:66] Checking if "default-k8s-diff-port-627203" exists ...
	I0317 11:20:35.490455  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.490636  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.491384  341496 out.go:177] * Verifying Kubernetes components...
	I0317 11:20:35.492644  341496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:20:35.518758  341496 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-627203"
	I0317 11:20:35.518803  341496 host.go:66] Checking if "default-k8s-diff-port-627203" exists ...
	I0317 11:20:35.519164  341496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-627203 --format={{.State.Status}}
	I0317 11:20:35.520182  341496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:20:31.236896  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:31.236935  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:31.236944  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:31.236959  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:31.236964  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:31.236971  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:31.236976  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:31.236984  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:31.236990  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:31.237009  317731 retry.go:31] will retry after 19.261545466s: missing components: kube-dns
	I0317 11:20:35.521412  341496 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:20:35.521431  341496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:20:35.521480  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:35.546610  341496 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:20:35.546635  341496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:20:35.546679  341496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-627203
	I0317 11:20:35.549777  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:35.572702  341496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/default-k8s-diff-port-627203/id_rsa Username:docker}
	I0317 11:20:35.624663  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 11:20:35.637144  341496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:20:35.724225  341496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:20:35.825754  341496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:20:36.141080  341496 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0317 11:20:36.142459  341496 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-627203" to be "Ready" ...
	I0317 11:20:36.207177  341496 node_ready.go:49] node "default-k8s-diff-port-627203" has status "Ready":"True"
	I0317 11:20:36.207215  341496 node_ready.go:38] duration metric: took 64.732247ms for node "default-k8s-diff-port-627203" to be "Ready" ...
	I0317 11:20:36.207231  341496 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:20:36.211865  341496 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace to be "Ready" ...
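The pod_ready.go lines that follow reflect the standard PodReady condition check against the API server. A sketch of that check with client-go, with the kubeconfig path and pod name taken from this log (not minikube's exact code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-tm7kk", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                // This is the condition the log reports as "Ready":"False".
                fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status)
            }
        }
    }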
	I0317 11:20:36.619880  341496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:20:36.621467  341496 addons.go:514] duration metric: took 1.131519409s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 11:20:36.646479  341496 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-627203" context rescaled to 1 replicas
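The kapi.go rescale above drops CoreDNS from its default two replicas to one, since a single-node cluster needs only one. Roughly the equivalent client-go calls against the scale subresource (a sketch under the same kubeconfig assumption, not minikube's implementation):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deployments := cs.AppsV1().Deployments("kube-system")
        // Read the current scale subresource, then write it back with one replica.
        s, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        s.Spec.Replicas = 1
        if _, err := deployments.UpdateScale(context.TODO(), "coredns", s, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }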
	I0317 11:20:38.217170  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:40.881134  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:40.881166  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:40.881172  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:20:40.881180  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:40.881183  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:20:40.881187  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:20:40.881190  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:20:40.881194  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:20:40.881197  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:20:40.881210  326404 retry.go:31] will retry after 23.951072137s: missing components: kube-dns
	I0317 11:20:40.524557  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:20:40.524600  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:20:40.524614  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:20:40.524624  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:40.524632  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:20:40.524640  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:20:40.524649  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:20:40.524658  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:20:40.524664  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:20:40.524673  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:20:40.524693  271403 retry.go:31] will retry after 1m5.301611864s: missing components: kube-dns
	I0317 11:20:40.217729  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:42.716852  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:44.717026  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:46.717095  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:49.217150  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:50.502591  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:20:50.502629  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:20:50.502636  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:20:50.502647  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:20:50.502652  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:20:50.502658  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:20:50.502664  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:20:50.502670  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:20:50.502676  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:20:50.502696  317731 retry.go:31] will retry after 27.654906766s: missing components: kube-dns
	I0317 11:20:51.716947  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:54.217035  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:56.217405  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:20:58.716755  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:00.717212  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:03.216840  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:04.837935  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:04.837975  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:04.837986  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:21:04.837998  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:04.838004  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:21:04.838010  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:21:04.838016  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:21:04.838020  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:21:04.838025  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:21:04.838044  326404 retry.go:31] will retry after 29.604408571s: missing components: kube-dns
	I0317 11:21:05.716737  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:07.717290  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:10.216367  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:12.217359  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:14.717254  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:17.216553  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:19.216868  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:18.162882  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:18.162924  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:18.162931  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:21:18.162943  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:18.162950  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:21:18.162957  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:21:18.162963  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:21:18.162969  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:21:18.162978  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:21:18.162995  317731 retry.go:31] will retry after 25.805377541s: missing components: kube-dns
	I0317 11:21:21.717204  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:23.717446  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:26.217593  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:28.716779  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:30.716838  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:32.717482  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:34.717607  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:34.446564  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:34.446602  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:34.446609  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:21:34.446620  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:34.446625  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:21:34.446633  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:21:34.446637  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:21:34.446644  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:21:34.446649  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:21:34.446672  326404 retry.go:31] will retry after 39.340349632s: missing components: kube-dns
	I0317 11:21:37.217012  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:39.720107  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:42.217009  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:44.717014  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:43.975001  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:21:43.975039  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:43.975046  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:21:43.975057  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:21:43.975063  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:21:43.975070  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:21:43.975075  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:21:43.975082  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:21:43.975087  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:21:43.975105  317731 retry.go:31] will retry after 50.299309092s: missing components: kube-dns
	I0317 11:21:45.830506  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:21:45.830550  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:21:45.830565  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:21:45.830575  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:21:45.830582  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:21:45.830589  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:21:45.830596  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:21:45.830602  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:21:45.830612  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:21:45.830619  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:21:45.830639  271403 retry.go:31] will retry after 1m6.469274108s: missing components: kube-dns
	I0317 11:21:47.216852  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:49.716980  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:51.717159  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:53.717199  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:56.216966  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:21:58.716666  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:00.716842  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:03.216854  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:05.716421  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:07.717473  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:09.717607  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:12.216801  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:14.217528  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:13.791135  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:13.791174  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:13.791182  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:22:13.791189  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:13.791193  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:22:13.791198  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:22:13.791201  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:22:13.791204  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:22:13.791207  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:22:13.791221  326404 retry.go:31] will retry after 37.076286109s: missing components: kube-dns
	I0317 11:22:16.716908  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:18.717190  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:21.216745  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:23.717172  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:25.717597  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:28.216363  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:30.216624  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:32.216877  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:34.716824  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:34.281779  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:34.281815  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:34.281822  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:22:34.281830  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:34.281834  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:22:34.281840  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:22:34.281844  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:22:34.281848  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:22:34.281851  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:22:34.281866  317731 retry.go:31] will retry after 1m2.657088736s: missing components: kube-dns
	I0317 11:22:37.217665  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:39.716973  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:41.717247  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:44.216939  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:46.716529  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:48.716994  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:50.872276  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:22:50.872306  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:50.872312  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:22:50.872319  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:22:50.872323  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:22:50.872329  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:22:50.872332  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:22:50.872336  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:22:50.872339  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:22:50.872352  326404 retry.go:31] will retry after 59.664508979s: missing components: kube-dns
	I0317 11:22:52.304439  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:22:52.304483  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:22:52.304503  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:22:52.304514  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:22:52.304522  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:22:52.304529  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:22:52.304538  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:22:52.304546  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:22:52.304553  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:22:52.304559  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:22:52.304577  271403 retry.go:31] will retry after 57.75468648s: missing components: kube-dns
	I0317 11:22:51.216816  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:53.216970  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:55.716609  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:22:57.717480  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:00.217407  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:02.716365  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:04.716438  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:06.716987  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:09.216843  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:11.217200  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:13.218595  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:15.717196  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:18.216354  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:20.217860  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:22.716518  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:24.717213  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:27.216933  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:29.717016  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:32.216483  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:34.217018  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:36.716769  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:38.717020  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:36.943443  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:23:36.943481  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:36.943487  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:23:36.943497  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:23:36.943503  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:23:36.943509  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:23:36.943512  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:23:36.943516  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:23:36.943520  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:23:36.943538  317731 retry.go:31] will retry after 53.125754107s: missing components: kube-dns
	I0317 11:23:40.717051  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:42.717588  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:45.216717  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:47.717009  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:49.718582  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:50.542962  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:23:50.543000  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:50.543007  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:23:50.543017  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:23:50.543021  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:23:50.543027  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:23:50.543030  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:23:50.543034  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:23:50.543037  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:23:50.543062  326404 retry.go:31] will retry after 54.915772165s: missing components: kube-dns
	I0317 11:23:50.063088  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:23:50.063127  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:23:50.063136  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:23:50.063153  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:23:50.063159  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:23:50.063166  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:23:50.063169  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:23:50.063174  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:23:50.063177  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:23:50.063180  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:23:50.063197  271403 retry.go:31] will retry after 47.200040689s: missing components: kube-dns
	I0317 11:23:52.216980  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:54.217886  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:56.717131  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:23:59.217483  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:01.717240  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:04.216952  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:06.217363  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:08.717047  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:11.216816  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:13.217215  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:15.217429  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:17.717023  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:20.216953  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:22.216989  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:24.716953  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:27.217304  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:29.717972  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:30.074980  317731 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:30.075015  317731 system_pods.go:89] "coredns-74ff55c5b-f5872" [6446de53-94b7-40a7-a689-e22a9a58c27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:30.075021  317731 system_pods.go:89] "etcd-old-k8s-version-702762" [05734d6a-0709-4967-8e5b-68014168c603] Running
	I0317 11:24:30.075028  317731 system_pods.go:89] "kindnet-qhsp2" [57e41c3b-76bc-47e0-b204-638d30f47ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:30.075032  317731 system_pods.go:89] "kube-apiserver-old-k8s-version-702762" [986c3e59-6b47-4f64-b4a1-47cf0f0efcaa] Running
	I0317 11:24:30.075036  317731 system_pods.go:89] "kube-controller-manager-old-k8s-version-702762" [87198264-46a3-4c87-b7b3-0e553a89ff35] Running
	I0317 11:24:30.075040  317731 system_pods.go:89] "kube-proxy-l5hsd" [27f0aacc-8c1b-4dc3-ae0f-bfde7a5cdeee] Running
	I0317 11:24:30.075046  317731 system_pods.go:89] "kube-scheduler-old-k8s-version-702762" [bce266c8-ee92-4d4e-b2fb-9ff33c9d0bd2] Running
	I0317 11:24:30.075049  317731 system_pods.go:89] "storage-provisioner" [7ce62743-18a8-4064-9c99-aa4b113386a5] Running
	I0317 11:24:30.077099  317731 out.go:201] 
	W0317 11:24:30.078365  317731 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:24:30.078387  317731 out.go:270] * 
	W0317 11:24:30.079214  317731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:24:30.080684  317731 out.go:201] 
	I0317 11:24:32.217812  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:34.716771  341496 pod_ready.go:103] pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace has status "Ready":"False"
	I0317 11:24:37.266163  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:24:37.266199  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:24:37.266210  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:24:37.266217  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:37.266225  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:24:37.266231  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:24:37.266236  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:24:37.266245  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:24:37.266251  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:24:37.266261  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:24:37.266275  271403 retry.go:31] will retry after 51.703965946s: missing components: kube-dns
	I0317 11:24:36.216864  341496 pod_ready.go:82] duration metric: took 4m0.004958001s for pod "coredns-668d6bf9bc-tm7kk" in "kube-system" namespace to be "Ready" ...
	E0317 11:24:36.216891  341496 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 11:24:36.216901  341496 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.218595  341496 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zwq6r" not found
	I0317 11:24:36.218617  341496 pod_ready.go:82] duration metric: took 1.707352ms for pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace to be "Ready" ...
	E0317 11:24:36.218628  341496 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-zwq6r" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zwq6r" not found
	I0317 11:24:36.218636  341496 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.222286  341496 pod_ready.go:93] pod "etcd-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.222302  341496 pod_ready.go:82] duration metric: took 3.659438ms for pod "etcd-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.222314  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.225705  341496 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.225722  341496 pod_ready.go:82] duration metric: took 3.400096ms for pod "kube-apiserver-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.225735  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.228777  341496 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.228794  341496 pod_ready.go:82] duration metric: took 3.051925ms for pod "kube-controller-manager-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.228805  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lxqgz" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.415375  341496 pod_ready.go:93] pod "kube-proxy-lxqgz" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.415396  341496 pod_ready.go:82] duration metric: took 186.584372ms for pod "kube-proxy-lxqgz" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.415406  341496 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.814949  341496 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-627203" in "kube-system" namespace has status "Ready":"True"
	I0317 11:24:36.814974  341496 pod_ready.go:82] duration metric: took 399.56185ms for pod "kube-scheduler-default-k8s-diff-port-627203" in "kube-system" namespace to be "Ready" ...
	I0317 11:24:36.814983  341496 pod_ready.go:39] duration metric: took 4m0.60773487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
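[Editor's note] The repeated `pod_ready.go:103` lines report the pod's Ready condition, which stays "False" until every container passes its readiness probe — a pod can be scheduled and Running while still not Ready. A small sketch of reading that condition via client-go, under the same assumptions as above (hypothetical kubeconfig path; pod name taken from the log for illustration):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True — the signal
    // behind the `"Ready":"False"` lines in the log above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-tm7kk", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
    }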
	I0317 11:24:36.815000  341496 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:24:36.815049  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:24:36.815111  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:24:36.850770  341496 cri.go:89] found id: "ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:36.850801  341496 cri.go:89] found id: ""
	I0317 11:24:36.850811  341496 logs.go:282] 1 containers: [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610]
	I0317 11:24:36.850864  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.854204  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:24:36.854262  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:24:36.887650  341496 cri.go:89] found id: "bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:36.887674  341496 cri.go:89] found id: ""
	I0317 11:24:36.887682  341496 logs.go:282] 1 containers: [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe]
	I0317 11:24:36.887732  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.891072  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:24:36.891141  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:24:36.924018  341496 cri.go:89] found id: ""
	I0317 11:24:36.924041  341496 logs.go:282] 0 containers: []
	W0317 11:24:36.924052  341496 logs.go:284] No container was found matching "coredns"
	I0317 11:24:36.924059  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:24:36.924132  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:24:36.957551  341496 cri.go:89] found id: "17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:36.957576  341496 cri.go:89] found id: ""
	I0317 11:24:36.957585  341496 logs.go:282] 1 containers: [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9]
	I0317 11:24:36.957640  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.961125  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:24:36.961193  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:24:36.995097  341496 cri.go:89] found id: "a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:36.995124  341496 cri.go:89] found id: ""
	I0317 11:24:36.995135  341496 logs.go:282] 1 containers: [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba]
	I0317 11:24:36.995183  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:36.998558  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:24:36.998615  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:24:37.030703  341496 cri.go:89] found id: "e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:37.030731  341496 cri.go:89] found id: ""
	I0317 11:24:37.030741  341496 logs.go:282] 1 containers: [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0]
	I0317 11:24:37.030824  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:37.034348  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:24:37.034410  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:24:37.066880  341496 cri.go:89] found id: ""
	I0317 11:24:37.066922  341496 logs.go:282] 0 containers: []
	W0317 11:24:37.066933  341496 logs.go:284] No container was found matching "kindnet"
	I0317 11:24:37.066950  341496 logs.go:123] Gathering logs for etcd [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe] ...
	I0317 11:24:37.066964  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:37.104789  341496 logs.go:123] Gathering logs for kube-scheduler [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9] ...
	I0317 11:24:37.104816  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:37.143054  341496 logs.go:123] Gathering logs for kube-proxy [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba] ...
	I0317 11:24:37.143083  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:37.177575  341496 logs.go:123] Gathering logs for kube-controller-manager [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0] ...
	I0317 11:24:37.177610  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:37.223926  341496 logs.go:123] Gathering logs for containerd ...
	I0317 11:24:37.223956  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:24:37.272572  341496 logs.go:123] Gathering logs for kubelet ...
	I0317 11:24:37.272600  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:24:37.363184  341496 logs.go:123] Gathering logs for dmesg ...
	I0317 11:24:37.363214  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:24:37.384660  341496 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:24:37.384687  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:24:37.470494  341496 logs.go:123] Gathering logs for kube-apiserver [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610] ...
	I0317 11:24:37.470522  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:37.510318  341496 logs.go:123] Gathering logs for container status ...
	I0317 11:24:37.510345  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
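[Editor's note] The "container status" step above runs a shell fallback chain: use crictl if it is on the PATH, otherwise fall back to `docker ps -a`. The harness executes it over SSH via ssh_runner; run locally, the same command could be mirrored with os/exec — a sketch, with the remote-host plumbing omitted:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same fallback chain as the log line above: prefer crictl, fall back to docker.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }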
	I0317 11:24:40.047399  341496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:24:40.058642  341496 api_server.go:72] duration metric: took 4m4.568840658s to wait for apiserver process to appear ...
	I0317 11:24:40.058671  341496 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:24:40.058702  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:24:40.058747  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:24:40.094399  341496 cri.go:89] found id: "ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:40.094426  341496 cri.go:89] found id: ""
	I0317 11:24:40.094436  341496 logs.go:282] 1 containers: [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610]
	I0317 11:24:40.094492  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.098090  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:24:40.098151  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:24:40.130616  341496 cri.go:89] found id: "bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:40.130634  341496 cri.go:89] found id: ""
	I0317 11:24:40.130641  341496 logs.go:282] 1 containers: [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe]
	I0317 11:24:40.130686  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.133963  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:24:40.134022  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:24:40.166714  341496 cri.go:89] found id: ""
	I0317 11:24:40.166737  341496 logs.go:282] 0 containers: []
	W0317 11:24:40.166749  341496 logs.go:284] No container was found matching "coredns"
	I0317 11:24:40.166757  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:24:40.166814  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:24:40.200402  341496 cri.go:89] found id: "17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:40.200428  341496 cri.go:89] found id: ""
	I0317 11:24:40.200438  341496 logs.go:282] 1 containers: [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9]
	I0317 11:24:40.200498  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.203808  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:24:40.203882  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:24:40.237218  341496 cri.go:89] found id: "a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:40.237243  341496 cri.go:89] found id: ""
	I0317 11:24:40.237254  341496 logs.go:282] 1 containers: [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba]
	I0317 11:24:40.237312  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.240687  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:24:40.240741  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:24:40.273296  341496 cri.go:89] found id: "e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:40.273317  341496 cri.go:89] found id: ""
	I0317 11:24:40.273326  341496 logs.go:282] 1 containers: [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0]
	I0317 11:24:40.273393  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:40.277173  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:24:40.277247  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:24:40.308698  341496 cri.go:89] found id: ""
	I0317 11:24:40.308720  341496 logs.go:282] 0 containers: []
	W0317 11:24:40.308728  341496 logs.go:284] No container was found matching "kindnet"
	I0317 11:24:40.308740  341496 logs.go:123] Gathering logs for kube-scheduler [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9] ...
	I0317 11:24:40.308752  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:40.348491  341496 logs.go:123] Gathering logs for kube-proxy [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba] ...
	I0317 11:24:40.348522  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:40.381699  341496 logs.go:123] Gathering logs for kube-controller-manager [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0] ...
	I0317 11:24:40.381727  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:40.428909  341496 logs.go:123] Gathering logs for container status ...
	I0317 11:24:40.428937  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:24:40.464242  341496 logs.go:123] Gathering logs for etcd [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe] ...
	I0317 11:24:40.464268  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:40.502434  341496 logs.go:123] Gathering logs for containerd ...
	I0317 11:24:40.502464  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:24:40.549009  341496 logs.go:123] Gathering logs for kubelet ...
	I0317 11:24:40.549038  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:24:40.645736  341496 logs.go:123] Gathering logs for dmesg ...
	I0317 11:24:40.645768  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:24:40.667061  341496 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:24:40.667089  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:24:40.747006  341496 logs.go:123] Gathering logs for kube-apiserver [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610] ...
	I0317 11:24:40.747040  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:43.287988  341496 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0317 11:24:43.291755  341496 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0317 11:24:43.292626  341496 api_server.go:141] control plane version: v1.32.2
	I0317 11:24:43.292649  341496 api_server.go:131] duration metric: took 3.233971345s to wait for apiserver health ...
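[Editor's note] The healthz wait above is an HTTPS GET against the apiserver's /healthz endpoint (here on the non-default port 8444 used by the default-k8s-diff-port test); a healthy apiserver answers 200 with the body "ok". A quick manual probe looks like the sketch below — it skips certificate verification for brevity, whereas minikube proper validates against the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify only for a throwaway probe; the apiserver cert is
        // signed by the cluster CA, not a public one.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.76.2:8444/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }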
	I0317 11:24:43.292656  341496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:24:43.292676  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 11:24:43.292724  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 11:24:43.325112  341496 cri.go:89] found id: "ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:43.325137  341496 cri.go:89] found id: ""
	I0317 11:24:43.325146  341496 logs.go:282] 1 containers: [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610]
	I0317 11:24:43.325211  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.328726  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 11:24:43.328771  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 11:24:43.362728  341496 cri.go:89] found id: "bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:43.362753  341496 cri.go:89] found id: ""
	I0317 11:24:43.362763  341496 logs.go:282] 1 containers: [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe]
	I0317 11:24:43.362819  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.367669  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 11:24:43.367741  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 11:24:43.402187  341496 cri.go:89] found id: ""
	I0317 11:24:43.402216  341496 logs.go:282] 0 containers: []
	W0317 11:24:43.402227  341496 logs.go:284] No container was found matching "coredns"
	I0317 11:24:43.402234  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 11:24:43.402283  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 11:24:43.435445  341496 cri.go:89] found id: "17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:43.435466  341496 cri.go:89] found id: ""
	I0317 11:24:43.435474  341496 logs.go:282] 1 containers: [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9]
	I0317 11:24:43.435534  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.438732  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 11:24:43.438789  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 11:24:43.473195  341496 cri.go:89] found id: "a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:43.473225  341496 cri.go:89] found id: ""
	I0317 11:24:43.473236  341496 logs.go:282] 1 containers: [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba]
	I0317 11:24:43.473296  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.476550  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 11:24:43.476626  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 11:24:43.508805  341496 cri.go:89] found id: "e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:43.508825  341496 cri.go:89] found id: ""
	I0317 11:24:43.508833  341496 logs.go:282] 1 containers: [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0]
	I0317 11:24:43.508880  341496 ssh_runner.go:195] Run: which crictl
	I0317 11:24:43.512124  341496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 11:24:43.512184  341496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 11:24:43.548896  341496 cri.go:89] found id: ""
	I0317 11:24:43.548917  341496 logs.go:282] 0 containers: []
	W0317 11:24:43.548926  341496 logs.go:284] No container was found matching "kindnet"
	I0317 11:24:43.548942  341496 logs.go:123] Gathering logs for container status ...
	I0317 11:24:43.548955  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 11:24:43.586167  341496 logs.go:123] Gathering logs for kube-scheduler [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9] ...
	I0317 11:24:43.586208  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9"
	I0317 11:24:43.626501  341496 logs.go:123] Gathering logs for kube-proxy [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba] ...
	I0317 11:24:43.626537  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba"
	I0317 11:24:43.659444  341496 logs.go:123] Gathering logs for kube-controller-manager [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0] ...
	I0317 11:24:43.659470  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0"
	I0317 11:24:43.704387  341496 logs.go:123] Gathering logs for kubelet ...
	I0317 11:24:43.704417  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 11:24:43.793479  341496 logs.go:123] Gathering logs for dmesg ...
	I0317 11:24:43.793516  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 11:24:43.813483  341496 logs.go:123] Gathering logs for describe nodes ...
	I0317 11:24:43.813522  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 11:24:43.899448  341496 logs.go:123] Gathering logs for kube-apiserver [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610] ...
	I0317 11:24:43.899483  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610"
	I0317 11:24:43.941628  341496 logs.go:123] Gathering logs for etcd [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe] ...
	I0317 11:24:43.941659  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe"
	I0317 11:24:43.981639  341496 logs.go:123] Gathering logs for containerd ...
	I0317 11:24:43.981675  341496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 11:24:45.462816  326404 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:45.462854  326404 system_pods.go:89] "coredns-668d6bf9bc-nrkfd" [20fa0930-1a0e-4878-a0a6-91d0cc8a89f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:45.462861  326404 system_pods.go:89] "etcd-no-preload-189670" [c23bef33-f31d-4545-9d70-698788870a1c] Running
	I0317 11:24:45.462869  326404 system_pods.go:89] "kindnet-x964l" [73733da1-487f-4ec5-a874-944d550d90d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:45.462873  326404 system_pods.go:89] "kube-apiserver-no-preload-189670" [efe6ed94-360c-4b09-90ac-a1eb95166b70] Running
	I0317 11:24:45.462878  326404 system_pods.go:89] "kube-controller-manager-no-preload-189670" [55f80b50-63ce-4352-be0d-d0c1a2b295e8] Running
	I0317 11:24:45.462881  326404 system_pods.go:89] "kube-proxy-dw92z" [85d8ff43-d6b4-453d-b943-b6e4977b504c] Running
	I0317 11:24:45.462885  326404 system_pods.go:89] "kube-scheduler-no-preload-189670" [bcd6b509-90cb-4c52-bb00-f63e8b4e7a54] Running
	I0317 11:24:45.462888  326404 system_pods.go:89] "storage-provisioner" [1a5916fb-f30a-4b80-aa91-71e515c967a9] Running
	I0317 11:24:45.464780  326404 out.go:201] 
	W0317 11:24:45.466250  326404 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:24:45.466269  326404 out.go:270] * 
	W0317 11:24:45.467129  326404 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:24:45.468845  326404 out.go:201] 
	I0317 11:24:46.529924  341496 system_pods.go:59] 8 kube-system pods found
	I0317 11:24:46.529972  341496 system_pods.go:61] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:46.529982  341496 system_pods.go:61] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:46.529994  341496 system_pods.go:61] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:46.530000  341496 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:46.530008  341496 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:46.530013  341496 system_pods.go:61] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:46.530017  341496 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:46.530022  341496 system_pods.go:61] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:46.530030  341496 system_pods.go:74] duration metric: took 3.237367001s to wait for pod list to return data ...
	I0317 11:24:46.530050  341496 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:24:46.532619  341496 default_sa.go:45] found service account: "default"
	I0317 11:24:46.532644  341496 default_sa.go:55] duration metric: took 2.587793ms for default service account to be created ...
	I0317 11:24:46.532654  341496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:24:46.534994  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:46.535018  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:46.535023  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:46.535030  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:46.535034  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:46.535038  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:46.535041  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:46.535044  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:46.535048  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:46.535073  341496 retry.go:31] will retry after 302.98689ms: missing components: kube-dns
	I0317 11:24:46.842862  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:46.842906  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:46.842915  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:46.842933  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:46.842941  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:46.842951  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:46.842964  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:46.842970  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:46.842977  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:46.842998  341496 retry.go:31] will retry after 295.784338ms: missing components: kube-dns
	I0317 11:24:47.142622  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:47.142650  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:47.142656  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:47.142664  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:47.142667  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:47.142672  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:47.142675  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:47.142678  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:47.142682  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:47.142695  341496 retry.go:31] will retry after 329.685621ms: missing components: kube-dns
	I0317 11:24:47.476124  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:47.476155  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:47.476163  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:47.476172  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:47.476176  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:47.476183  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:47.476187  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:47.476192  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:47.476197  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:47.476218  341496 retry.go:31] will retry after 460.772013ms: missing components: kube-dns
	I0317 11:24:47.940947  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:47.940977  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:47.940983  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:47.940990  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:47.940994  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:47.941000  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:47.941005  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:47.941011  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:47.941015  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:47.941035  341496 retry.go:31] will retry after 463.179256ms: missing components: kube-dns
	I0317 11:24:48.407824  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:48.407851  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:48.407858  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:48.407866  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:48.407870  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:48.407874  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:48.407877  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:48.407881  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:48.407884  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:48.407906  341496 retry.go:31] will retry after 834.652418ms: missing components: kube-dns
	I0317 11:24:49.245717  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:49.245750  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:49.245757  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:49.245771  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:49.245776  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:49.245783  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:49.245788  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:49.245793  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:49.245797  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:49.245816  341496 retry.go:31] will retry after 764.813884ms: missing components: kube-dns
	I0317 11:24:50.014701  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:50.014734  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:50.014739  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:50.014747  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:50.014751  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:50.014755  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:50.014759  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:50.014762  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:50.014765  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:50.014779  341496 retry.go:31] will retry after 1.349545391s: missing components: kube-dns
	I0317 11:24:51.368659  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:51.368694  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:51.368699  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:51.368708  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:51.368712  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:51.368716  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:51.368722  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:51.368726  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:51.368730  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:51.368743  341496 retry.go:31] will retry after 1.382092092s: missing components: kube-dns
	I0317 11:24:52.754980  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:52.755015  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:52.755025  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:52.755041  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:52.755047  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:52.755053  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:52.755058  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:52.755063  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:52.755069  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:52.755087  341496 retry.go:31] will retry after 1.716623878s: missing components: kube-dns
	I0317 11:24:54.475907  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:54.475940  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:54.475945  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:54.475954  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:54.475958  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:54.475962  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:54.475965  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:54.475968  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:54.475973  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:54.475986  341496 retry.go:31] will retry after 2.138707569s: missing components: kube-dns
	I0317 11:24:56.618436  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:24:56.618470  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:24:56.618475  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:24:56.618484  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:24:56.618488  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:24:56.618495  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:24:56.618499  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:24:56.618502  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:24:56.618505  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:24:56.618517  341496 retry.go:31] will retry after 3.63528576s: missing components: kube-dns
	I0317 11:25:00.258199  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:00.258235  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:00.258241  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:00.258251  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:00.258254  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:00.258260  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:00.258263  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:00.258266  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:00.258270  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:00.258282  341496 retry.go:31] will retry after 4.131879021s: missing components: kube-dns
	I0317 11:25:04.395415  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:04.395449  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:04.395457  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:04.395468  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:04.395477  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:04.395484  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:04.395490  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:04.395494  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:04.395501  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:04.395524  341496 retry.go:31] will retry after 4.696723656s: missing components: kube-dns
	I0317 11:25:09.098999  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:09.099034  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:09.099039  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:09.099048  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:09.099051  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:09.099056  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:09.099060  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:09.099067  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:09.099084  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:09.099101  341496 retry.go:31] will retry after 6.54261594s: missing components: kube-dns
	I0317 11:25:15.645674  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:15.645707  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:15.645713  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:15.645731  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:15.645737  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:15.645741  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:15.645744  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:15.645748  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:15.645751  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:15.645765  341496 retry.go:31] will retry after 8.682977828s: missing components: kube-dns
	I0317 11:25:24.334764  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:24.334803  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:24.334812  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:24.334823  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:24.334829  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:24.334836  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:24.334841  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:24.334846  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:24.334851  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:24.334872  341496 retry.go:31] will retry after 8.369739081s: missing components: kube-dns
	I0317 11:25:28.975667  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:25:28.975708  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:25:28.975723  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:25:28.975732  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:28.975739  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:25:28.975745  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:25:28.975750  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:25:28.975758  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:25:28.975764  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:25:28.975773  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:25:28.975793  271403 retry.go:31] will retry after 1m5.809313986s: missing components: kube-dns
	I0317 11:25:32.710525  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:32.710557  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:32.710565  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:32.710573  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:32.710577  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:32.710581  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:32.710585  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:32.710588  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:32.710591  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:32.710607  341496 retry.go:31] will retry after 9.14722352s: missing components: kube-dns
	I0317 11:25:41.862777  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:41.862817  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:41.862822  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:41.862829  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:41.862833  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:41.862837  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:41.862840  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:41.862843  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:41.862846  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:41.862859  341496 retry.go:31] will retry after 13.233633218s: missing components: kube-dns
	I0317 11:25:55.099860  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:25:55.099896  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:25:55.099902  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:25:55.099910  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:25:55.099914  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:25:55.099919  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:25:55.099923  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:25:55.099926  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:25:55.099930  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:25:55.099944  341496 retry.go:31] will retry after 14.188953941s: missing components: kube-dns
	I0317 11:26:09.294232  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:26:09.294272  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:26:09.294277  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:26:09.294286  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:26:09.294292  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:26:09.294298  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:26:09.294303  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:26:09.294308  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:26:09.294317  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:26:09.294335  341496 retry.go:31] will retry after 24.368966059s: missing components: kube-dns
	I0317 11:26:34.792964  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:26:34.792999  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:26:34.793007  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:26:34.793018  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:26:34.793023  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:26:34.793027  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:26:34.793030  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:26:34.793034  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:26:34.793037  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:26:34.793041  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:26:34.793054  271403 retry.go:31] will retry after 46.388333894s: missing components: kube-dns
	I0317 11:26:33.667957  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:26:33.667993  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:26:33.668007  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:26:33.668018  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:26:33.668026  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:26:33.668032  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:26:33.668038  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:26:33.668047  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:26:33.668055  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:26:33.668080  341496 retry.go:31] will retry after 32.292524587s: missing components: kube-dns
	I0317 11:27:05.964207  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:27:05.964241  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:27:05.964247  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:27:05.964257  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:27:05.964263  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:27:05.964267  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:27:05.964270  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:27:05.964273  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:27:05.964277  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:27:05.964292  341496 retry.go:31] will retry after 41.950050158s: missing components: kube-dns
	I0317 11:27:21.185012  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:27:21.185060  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:27:21.185074  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:27:21.185084  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:27:21.185091  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:27:21.185095  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:27:21.185099  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:27:21.185106  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:27:21.185110  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:27:21.185116  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:27:21.185130  271403 retry.go:31] will retry after 1m14.28936614s: missing components: kube-dns
	I0317 11:27:47.919561  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:27:47.919605  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:27:47.919613  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:27:47.919626  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:27:47.919632  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:27:47.919638  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:27:47.919644  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:27:47.919649  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:27:47.919656  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:27:47.919674  341496 retry.go:31] will retry after 51.422565643s: missing components: kube-dns
	I0317 11:28:35.478966  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:28:35.479004  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:28:35.479016  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:28:35.479026  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:28:35.479033  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:28:35.479040  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:28:35.479047  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:28:35.479053  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:28:35.479060  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:28:35.479067  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:28:35.479087  271403 retry.go:31] will retry after 50.356839714s: missing components: kube-dns
	I0317 11:28:39.349158  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:28:39.349196  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:28:39.349208  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:28:39.349219  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:28:39.349226  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:28:39.349232  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:28:39.349236  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:28:39.349241  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:28:39.349247  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:28:39.349264  341496 retry.go:31] will retry after 1m0.161179598s: missing components: kube-dns
	I0317 11:29:25.840490  271403 system_pods.go:86] 9 kube-system pods found
	I0317 11:29:25.840529  271403 system_pods.go:89] "calico-kube-controllers-77969b7d87-pv6sc" [4df3efb8-5a9a-418a-a31f-192993dce75a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0317 11:29:25.840539  271403 system_pods.go:89] "calico-node-ks7vr" [95689a9d-0dbc-4987-b8dc-3d091fdb6867] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0317 11:29:25.840547  271403 system_pods.go:89] "coredns-668d6bf9bc-zd9kj" [b0dc4e68-e11b-40e3-b9c6-fca23a609989] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:29:25.840551  271403 system_pods.go:89] "etcd-calico-236437" [158a815c-9849-4fea-af76-7c6fbc9c0c1b] Running
	I0317 11:29:25.840557  271403 system_pods.go:89] "kube-apiserver-calico-236437" [6d30e649-878e-49b5-911b-3ce54ae4c2aa] Running
	I0317 11:29:25.840560  271403 system_pods.go:89] "kube-controller-manager-calico-236437" [ecfd2ffb-e783-4adc-9c2c-b58307e43ac0] Running
	I0317 11:29:25.840565  271403 system_pods.go:89] "kube-proxy-ntqtp" [03ec4108-ddad-456c-9d67-fed13e813ea5] Running
	I0317 11:29:25.840568  271403 system_pods.go:89] "kube-scheduler-calico-236437" [67dbfbf1-eee2-4c38-9af5-adfd5925979c] Running
	I0317 11:29:25.840572  271403 system_pods.go:89] "storage-provisioner" [1bc25a90-7158-4497-bbda-c2aa423138ff] Running
	I0317 11:29:25.842964  271403 out.go:201] 
	W0317 11:29:25.844361  271403 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:29:25.844380  271403 out.go:270] * 
	W0317 11:29:25.845232  271403 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:29:25.846427  271403 out.go:201] 
	I0317 11:29:39.514594  341496 system_pods.go:86] 8 kube-system pods found
	I0317 11:29:39.514631  341496 system_pods.go:89] "coredns-668d6bf9bc-tm7kk" [b1b35fe8-579c-409e-90f6-8b7b10d4f29a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 11:29:39.514640  341496 system_pods.go:89] "etcd-default-k8s-diff-port-627203" [624f9cde-39fa-4bec-8392-5cbe904c4957] Running
	I0317 11:29:39.514648  341496 system_pods.go:89] "kindnet-q6mbv" [fba06093-89a5-4fd7-a387-ba646a61b908] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 11:29:39.514652  341496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-627203" [36f40333-b8b9-4fb4-b2d6-f606a567306a] Running
	I0317 11:29:39.514655  341496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-627203" [f709b8c3-8ec2-4dc4-b8e4-21036cd0906c] Running
	I0317 11:29:39.514659  341496 system_pods.go:89] "kube-proxy-lxqgz" [efd8af03-68f8-4c6e-aad2-297adc236791] Running
	I0317 11:29:39.514663  341496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-627203" [e2a40b35-1d4d-45b6-9f91-a530af387e40] Running
	I0317 11:29:39.514666  341496 system_pods.go:89] "storage-provisioner" [7fc92246-f952-4c35-b421-baaeaeabc587] Running
	I0317 11:29:39.516738  341496 out.go:201] 
	W0317 11:29:39.517897  341496 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 11:29:39.517911  341496 out.go:270] * 
	W0317 11:29:39.518696  341496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 11:29:39.520058  341496 out.go:201] 
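	Both start invocations fail the same way: the node reports Ready and every control-plane pod runs, but kube-dns never does, so the apps_running wait times out (6m0s for default-k8s-diff-port, 15m0s for calico). The retry.go lines above show the shape of that wait: poll the kube-system pods, and on each miss sleep a growing, jittered interval until the deadline. Below is a minimal Go sketch of that polling loop; the growth factor and jitter are assumptions for illustration, not minikube's actual retry helper.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// componentsReady stands in for the real system_pods check; it stays
	// false here, just as kube-dns stayed Pending in the log above.
	func componentsReady() bool { return false }

	func main() {
		wait := 1 * time.Second
		deadline := time.Now().Add(6 * time.Minute) // mirrors "wait 6m0s for node"

		for time.Now().Before(deadline) {
			if componentsReady() {
				fmt.Println("all k8s-apps running")
				return
			}
			// Up to ~25% jitter so concurrent pollers do not synchronize.
			jitter := time.Duration(rand.Int63n(int64(wait)/4 + 1))
			fmt.Printf("will retry after %v: missing components: kube-dns\n", wait+jitter)
			time.Sleep(wait + jitter)
			wait = wait * 14 / 10 // assumed ~1.4x growth per attempt
		}
		fmt.Println("X Exiting due to GUEST_START: timed out waiting for apps_running")
	}

	The jitter is what keeps the two concurrent test processes above (341496 and 271403) from polling in lockstep, which is why their retry intervals drift apart over the run.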
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	849bab07baebc       6e38f40d628db       9 minutes ago       Running             storage-provisioner       0                   9f6b61f6afcad       storage-provisioner
	a8dd2a6252978       f1332858868e1       9 minutes ago       Running             kube-proxy                0                   348cd29964154       kube-proxy-lxqgz
	ecda5c2361090       85b7a174738ba       9 minutes ago       Running             kube-apiserver            0                   b3eca918d2e6d       kube-apiserver-default-k8s-diff-port-627203
	17c98b1fc52e8       d8e673e7c9983       9 minutes ago       Running             kube-scheduler            0                   2ff2a0e04809b       kube-scheduler-default-k8s-diff-port-627203
	e8e51714016cc       b6a454c5a800d       9 minutes ago       Running             kube-controller-manager   0                   e3368eea01f94       kube-controller-manager-default-k8s-diff-port-627203
	bee8f6be705ab       a9e7e6b294baf       9 minutes ago       Running             etcd                      0                   91333b3d03353       etcd-default-k8s-diff-port-627203
	
	
	==> containerd <==
	Mar 17 11:26:58 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:26:58.837630876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02ae2806dfb8436750b0fa08b27256779cc34df73d81f9f47a0febd22fe90fa4\": failed to find network info for sandbox \"02ae2806dfb8436750b0fa08b27256779cc34df73d81f9f47a0febd22fe90fa4\""
	Mar 17 11:27:12 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:27:12.816221481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:27:12 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:27:12.836078315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef17820ba2715e2be5b7bf7bcb3cb8604335f2028a7fdb037b39da87dfe37cdd\": failed to find network info for sandbox \"ef17820ba2715e2be5b7bf7bcb3cb8604335f2028a7fdb037b39da87dfe37cdd\""
	Mar 17 11:27:23 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:27:23.816001179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:27:23 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:27:23.835329104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95d86ec8ca370e1ec171e1d61a488f86f7af87af4c52ea3e270ecbbb8e329184\": failed to find network info for sandbox \"95d86ec8ca370e1ec171e1d61a488f86f7af87af4c52ea3e270ecbbb8e329184\""
	Mar 17 11:27:38 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:27:38.815932638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:27:38 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:27:38.836707106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8dfe13d61d2d3805205a5dbf7c9c59a60a664fd167ec89c6c4b3958db3a9ad86\": failed to find network info for sandbox \"8dfe13d61d2d3805205a5dbf7c9c59a60a664fd167ec89c6c4b3958db3a9ad86\""
	Mar 17 11:27:53 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:27:53.815617421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:27:53 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:27:53.833736903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dcfe5938bb87581f7d58d40cc7952c5b1a11941689b0516a3a958c7b34dbbdc7\": failed to find network info for sandbox \"dcfe5938bb87581f7d58d40cc7952c5b1a11941689b0516a3a958c7b34dbbdc7\""
	Mar 17 11:28:04 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:28:04.815648014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:28:04 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:28:04.834497656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce7062f71e0fd7cfb8f46b9b94eb94ee6680836794d789c8a55ce70089e62398\": failed to find network info for sandbox \"ce7062f71e0fd7cfb8f46b9b94eb94ee6680836794d789c8a55ce70089e62398\""
	Mar 17 11:28:17 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:28:17.816085395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:28:17 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:28:17.833824185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf3f1ebd864790a3237190788c1c746b20bd12217ac03ef71e172ae9fe328416\": failed to find network info for sandbox \"cf3f1ebd864790a3237190788c1c746b20bd12217ac03ef71e172ae9fe328416\""
	Mar 17 11:28:31 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:28:31.816189446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:28:31 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:28:31.838252610Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0669d98f951d592db25884489274dcf0d3dc68ee1e7970dcbce4be955691f756\": failed to find network info for sandbox \"0669d98f951d592db25884489274dcf0d3dc68ee1e7970dcbce4be955691f756\""
	Mar 17 11:28:44 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:28:44.816162168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:28:44 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:28:44.836335731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2deb0257ae89f57c453781f6adb3d44e932dab092e967585baedb5491ee9a8d2\": failed to find network info for sandbox \"2deb0257ae89f57c453781f6adb3d44e932dab092e967585baedb5491ee9a8d2\""
	Mar 17 11:28:55 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:28:55.815686222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:28:55 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:28:55.834146283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cc3d5727fde9c9bd19929dd03effd12743041d589a38fd7c84de6a135e97289\": failed to find network info for sandbox \"3cc3d5727fde9c9bd19929dd03effd12743041d589a38fd7c84de6a135e97289\""
	Mar 17 11:29:06 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:29:06.815840008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:29:06 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:29:06.835450520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73fd0f961a68e145b104155dded490b8bece0ce657ccc7d28c208f80c720df17\": failed to find network info for sandbox \"73fd0f961a68e145b104155dded490b8bece0ce657ccc7d28c208f80c720df17\""
	Mar 17 11:29:21 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:29:21.815730537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:29:21 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:29:21.833999459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0aea2244f8547d36097058adcb56dc7276125bc00011321ba4b28766456e3f1\": failed to find network info for sandbox \"a0aea2244f8547d36097058adcb56dc7276125bc00011321ba4b28766456e3f1\""
	Mar 17 11:29:32 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:29:32.816685910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,}"
	Mar 17 11:29:32 default-k8s-diff-port-627203 containerd[881]: time="2025-03-17T11:29:32.838984384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tm7kk,Uid:b1b35fe8-579c-409e-90f6-8b7b10d4f29a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f338a297f130859f273a7fb36df56a4e222e95470c23d3fa7742f0262a74b719\": failed to find network info for sandbox \"f338a297f130859f273a7fb36df56a4e222e95470c23d3fa7742f0262a74b719\""
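	The containerd log pinpoints why coredns-668d6bf9bc-tm7kk never leaves Pending: every RunPodSandbox attempt fails with "failed to find network info for sandbox", i.e. the CRI plugin has no usable CNI configuration to attach the pod to, consistent with kindnet-q6mbv (the CNI DaemonSet) also being stuck Pending above. A minimal sketch of the corresponding on-node check follows; /etc/cni/net.d is containerd's default CNI conf directory, but treat the path as an assumption for this particular image.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// containerd's CRI plugin loads CNI configs from this directory by default.
		confDir := "/etc/cni/net.d"

		entries, err := os.ReadDir(confDir)
		if err != nil {
			fmt.Printf("cannot read %s: %v (CNI config dir missing)\n", confDir, err)
			os.Exit(1)
		}
		if len(entries) == 0 {
			// This is the state the sandbox errors above imply: kindnet never
			// wrote its conflist, so no pod can be attached to a network.
			fmt.Printf("%s is empty: sandbox setup will fail\n", confDir)
			os.Exit(1)
		}
		for _, e := range entries {
			fmt.Println("found CNI config:", filepath.Join(confDir, e.Name()))
		}
	}

	Run inside the node (e.g. via minikube ssh): an empty or missing directory there matches the repeating sandbox errors above.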
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-627203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-627203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=default-k8s-diff-port-627203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T11_20_31_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 11:20:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-627203
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:29:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:24:55 +0000   Mon, 17 Mar 2025 11:20:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:24:55 +0000   Mon, 17 Mar 2025 11:20:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:24:55 +0000   Mon, 17 Mar 2025 11:20:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:24:55 +0000   Mon, 17 Mar 2025 11:20:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-627203
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 d38907cfee9b41c1b4e5a4af031b66f6
	  System UUID:                1f59b58a-0ef4-4330-b83f-f70cbd04f1ed
	  Boot ID:                    6cdff8eb-9dff-46dc-b46a-15af38578335
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.25
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-tm7kk                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m5s
	  kube-system                 etcd-default-k8s-diff-port-627203                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m9s
	  kube-system                 kindnet-q6mbv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m5s
	  kube-system                 kube-apiserver-default-k8s-diff-port-627203             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-627203    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-proxy-lxqgz                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-default-k8s-diff-port-627203             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m3s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  9m15s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 9m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m15s (x8 over 9m15s)  kubelet          Node default-k8s-diff-port-627203 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m15s (x8 over 9m15s)  kubelet          Node default-k8s-diff-port-627203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m15s (x7 over 9m15s)  kubelet          Node default-k8s-diff-port-627203 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m15s                  kubelet          Starting kubelet.
	  Normal   Starting                 9m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  9m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m9s                   kubelet          Node default-k8s-diff-port-627203 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m9s                   kubelet          Node default-k8s-diff-port-627203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m9s                   kubelet          Node default-k8s-diff-port-627203 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m6s                   node-controller  Node default-k8s-diff-port-627203 event: Registered Node default-k8s-diff-port-627203 in Controller
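	Per the node description, the kubelet is healthy and only the two CNI-dependent pods are stuck, so the "missing components: kube-dns" verdict reduces to: no Running pod in kube-system carries the kube-dns app label. A minimal client-go sketch of that check is below; the kubeconfig path and the k8s-app=kube-dns selector (which stock CoreDNS deployments use) are assumptions, not minikube's exact code.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes a kubeconfig at the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Stock CoreDNS deployments label their pods k8s-app=kube-dns.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}

		running := false
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
			running = running || p.Status.Phase == corev1.PodRunning
		}
		if !running {
			fmt.Println("missing components: kube-dns")
		}
	}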
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 9f 34 c1 3c 2d 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea db 01 46 f3 5d 08 06
	[Mar17 11:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 03 06 1a ae 04 08 06
	[Mar17 11:11] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ba d0 41 5a 57 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 03 06 1a ae 04 08 06
	[ +43.804696] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 68 f0 20 09 1d 08 06
	[  +0.014204] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 35 88 eb 1a ca 08 06
	[Mar17 11:12] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 40 5e e0 f5 10 08 06
	[  +0.000328] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da 68 f0 20 09 1d 08 06
	[Mar17 11:13] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 9d fa 19 03 e5 08 06
	[  +0.000467] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7a 6b 3f 12 54 e7 08 06
	[Mar17 11:14] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 15 0b 3c 2b d0 08 06
	[  +0.000401] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7a 6b 3f 12 54 e7 08 06
	
	
	==> etcd [bee8f6be705ab5c03f4bcde7fc34053c7a1da22517cc888e41630502630b84fe] <==
	{"level":"info","ts":"2025-03-17T11:20:25.945035Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-03-17T11:20:25.945390Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-03-17T11:20:25.945431Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-03-17T11:20:25.945517Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-03-17T11:20:25.945536Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-03-17T11:20:26.926965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-03-17T11:20:26.927014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-03-17T11:20:26.927044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-03-17T11:20:26.927060Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-03-17T11:20:26.927070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-03-17T11:20:26.927079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-03-17T11:20:26.927100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-03-17T11:20:26.927748Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T11:20:26.928411Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T11:20:26.928390Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T11:20:26.928409Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-627203 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T11:20:26.928526Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T11:20:26.928855Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T11:20:26.929050Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T11:20:26.929115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-17T11:20:26.929673Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T11:20:26.930490Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-03-17T11:20:26.930587Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T11:20:26.931292Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T11:20:26.932019Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:29:40 up  1:11,  0 users,  load average: 0.92, 0.76, 1.05
	Linux default-k8s-diff-port-627203 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [ecda5c23610904e9019ab46f8f8e6946ae147095bebf920e87875cd8cab83610] <==
	I0317 11:20:28.305532       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0317 11:20:28.305949       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0317 11:20:28.306085       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0317 11:20:28.306640       1 aggregator.go:171] initial CRD sync complete...
	I0317 11:20:28.306669       1 autoregister_controller.go:144] Starting autoregister controller
	I0317 11:20:28.306676       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0317 11:20:28.306682       1 cache.go:39] Caches are synced for autoregister controller
	I0317 11:20:28.308247       1 controller.go:615] quota admission added evaluator for: namespaces
	I0317 11:20:28.322782       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 11:20:28.354021       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0317 11:20:29.169880       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0317 11:20:29.173740       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0317 11:20:29.173760       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0317 11:20:29.606039       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 11:20:29.639842       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 11:20:29.713622       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0317 11:20:29.719720       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0317 11:20:29.720778       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 11:20:29.724350       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 11:20:30.231050       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 11:20:30.961449       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 11:20:30.970648       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0317 11:20:30.977567       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 11:20:35.710534       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0317 11:20:35.814884       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [e8e51714016ccbcf2864558f916990a73df5f67320b85ec2302b5d334bccaac0] <==
	I0317 11:20:34.784991       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 11:20:34.784994       1 shared_informer.go:320] Caches are synced for namespace
	I0317 11:20:34.786205       1 shared_informer.go:320] Caches are synced for daemon sets
	I0317 11:20:34.788399       1 shared_informer.go:320] Caches are synced for node
	I0317 11:20:34.788448       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0317 11:20:34.788480       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0317 11:20:34.788498       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0317 11:20:34.788516       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0317 11:20:34.794502       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-627203" podCIDRs=["10.244.0.0/24"]
	I0317 11:20:34.794538       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-627203"
	I0317 11:20:34.794572       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-627203"
	I0317 11:20:34.800806       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 11:20:35.615661       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-627203"
	I0317 11:20:36.009154       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="295.22449ms"
	I0317 11:20:36.022969       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.739ms"
	I0317 11:20:36.023088       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="76.92µs"
	I0317 11:20:36.031621       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="64.15µs"
	I0317 11:20:36.311060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="86.169319ms"
	I0317 11:20:36.318382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="7.260815ms"
	I0317 11:20:36.318473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="56.776µs"
	I0317 11:20:37.859725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="64.195µs"
	I0317 11:20:37.865143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="82.863µs"
	I0317 11:20:37.868196       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="85.196µs"
	I0317 11:20:41.301473       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-627203"
	I0317 11:24:55.713214       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-627203"
	
	
	==> kube-proxy [a8dd2a6252978083fec07007afde09b5c4512ec3d5d131e5c7b2bd2df768c6ba] <==
	I0317 11:20:36.560625       1 server_linux.go:66] "Using iptables proxy"
	I0317 11:20:36.727175       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0317 11:20:36.727237       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 11:20:36.747267       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0317 11:20:36.747338       1 server_linux.go:170] "Using iptables Proxier"
	I0317 11:20:36.749285       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 11:20:36.749700       1 server.go:497] "Version info" version="v1.32.2"
	I0317 11:20:36.749739       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 11:20:36.751415       1 config.go:199] "Starting service config controller"
	I0317 11:20:36.751474       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 11:20:36.751437       1 config.go:105] "Starting endpoint slice config controller"
	I0317 11:20:36.751586       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 11:20:36.751665       1 config.go:329] "Starting node config controller"
	I0317 11:20:36.751677       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 11:20:36.851723       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 11:20:36.851776       1 shared_informer.go:320] Caches are synced for service config
	I0317 11:20:36.851994       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [17c98b1fc52e823b33030fb9bb3069d8e9e2bba487d07ae65f5ef93516f872e9] <==
	W0317 11:20:28.331543       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0317 11:20:28.331567       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 11:20:28.331591       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0317 11:20:28.331623       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:20:28.331624       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0317 11:20:28.331637       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0317 11:20:28.331660       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0317 11:20:28.331662       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:20:28.331705       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 11:20:28.331790       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 11:20:28.331808       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0317 11:20:28.331855       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 11:20:28.331953       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 11:20:28.332001       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 11:20:29.173443       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 11:20:29.173491       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0317 11:20:29.203904       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 11:20:29.203944       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:20:29.228765       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0317 11:20:29.228827       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 11:20:29.310418       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 11:20:29.310461       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 11:20:29.357692       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0317 11:20:29.357745       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0317 11:20:31.625569       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 11:28:44 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:28:44.836616    1638 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2deb0257ae89f57c453781f6adb3d44e932dab092e967585baedb5491ee9a8d2\": failed to find network info for sandbox \"2deb0257ae89f57c453781f6adb3d44e932dab092e967585baedb5491ee9a8d2\""
	Mar 17 11:28:44 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:28:44.836691    1638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2deb0257ae89f57c453781f6adb3d44e932dab092e967585baedb5491ee9a8d2\": failed to find network info for sandbox \"2deb0257ae89f57c453781f6adb3d44e932dab092e967585baedb5491ee9a8d2\"" pod="kube-system/coredns-668d6bf9bc-tm7kk"
	Mar 17 11:28:44 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:28:44.836714    1638 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2deb0257ae89f57c453781f6adb3d44e932dab092e967585baedb5491ee9a8d2\": failed to find network info for sandbox \"2deb0257ae89f57c453781f6adb3d44e932dab092e967585baedb5491ee9a8d2\"" pod="kube-system/coredns-668d6bf9bc-tm7kk"
	Mar 17 11:28:44 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:28:44.836759    1638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tm7kk_kube-system(b1b35fe8-579c-409e-90f6-8b7b10d4f29a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tm7kk_kube-system(b1b35fe8-579c-409e-90f6-8b7b10d4f29a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2deb0257ae89f57c453781f6adb3d44e932dab092e967585baedb5491ee9a8d2\\\": failed to find network info for sandbox \\\"2deb0257ae89f57c453781f6adb3d44e932dab092e967585baedb5491ee9a8d2\\\"\"" pod="kube-system/coredns-668d6bf9bc-tm7kk" podUID="b1b35fe8-579c-409e-90f6-8b7b10d4f29a"
	Mar 17 11:28:45 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:28:45.815956    1638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-q6mbv" podUID="fba06093-89a5-4fd7-a387-ba646a61b908"
	Mar 17 11:28:55 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:28:55.834425    1638 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc3d5727fde9c9bd19929dd03effd12743041d589a38fd7c84de6a135e97289\": failed to find network info for sandbox \"3cc3d5727fde9c9bd19929dd03effd12743041d589a38fd7c84de6a135e97289\""
	Mar 17 11:28:55 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:28:55.834498    1638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc3d5727fde9c9bd19929dd03effd12743041d589a38fd7c84de6a135e97289\": failed to find network info for sandbox \"3cc3d5727fde9c9bd19929dd03effd12743041d589a38fd7c84de6a135e97289\"" pod="kube-system/coredns-668d6bf9bc-tm7kk"
	Mar 17 11:28:55 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:28:55.834521    1638 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc3d5727fde9c9bd19929dd03effd12743041d589a38fd7c84de6a135e97289\": failed to find network info for sandbox \"3cc3d5727fde9c9bd19929dd03effd12743041d589a38fd7c84de6a135e97289\"" pod="kube-system/coredns-668d6bf9bc-tm7kk"
	Mar 17 11:28:55 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:28:55.834567    1638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tm7kk_kube-system(b1b35fe8-579c-409e-90f6-8b7b10d4f29a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tm7kk_kube-system(b1b35fe8-579c-409e-90f6-8b7b10d4f29a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cc3d5727fde9c9bd19929dd03effd12743041d589a38fd7c84de6a135e97289\\\": failed to find network info for sandbox \\\"3cc3d5727fde9c9bd19929dd03effd12743041d589a38fd7c84de6a135e97289\\\"\"" pod="kube-system/coredns-668d6bf9bc-tm7kk" podUID="b1b35fe8-579c-409e-90f6-8b7b10d4f29a"
	Mar 17 11:28:59 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:28:59.816112    1638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-q6mbv" podUID="fba06093-89a5-4fd7-a387-ba646a61b908"
	Mar 17 11:29:06 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:06.835719    1638 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fd0f961a68e145b104155dded490b8bece0ce657ccc7d28c208f80c720df17\": failed to find network info for sandbox \"73fd0f961a68e145b104155dded490b8bece0ce657ccc7d28c208f80c720df17\""
	Mar 17 11:29:06 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:06.835797    1638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fd0f961a68e145b104155dded490b8bece0ce657ccc7d28c208f80c720df17\": failed to find network info for sandbox \"73fd0f961a68e145b104155dded490b8bece0ce657ccc7d28c208f80c720df17\"" pod="kube-system/coredns-668d6bf9bc-tm7kk"
	Mar 17 11:29:06 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:06.835830    1638 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fd0f961a68e145b104155dded490b8bece0ce657ccc7d28c208f80c720df17\": failed to find network info for sandbox \"73fd0f961a68e145b104155dded490b8bece0ce657ccc7d28c208f80c720df17\"" pod="kube-system/coredns-668d6bf9bc-tm7kk"
	Mar 17 11:29:06 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:06.835906    1638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tm7kk_kube-system(b1b35fe8-579c-409e-90f6-8b7b10d4f29a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tm7kk_kube-system(b1b35fe8-579c-409e-90f6-8b7b10d4f29a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73fd0f961a68e145b104155dded490b8bece0ce657ccc7d28c208f80c720df17\\\": failed to find network info for sandbox \\\"73fd0f961a68e145b104155dded490b8bece0ce657ccc7d28c208f80c720df17\\\"\"" pod="kube-system/coredns-668d6bf9bc-tm7kk" podUID="b1b35fe8-579c-409e-90f6-8b7b10d4f29a"
	Mar 17 11:29:10 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:10.816653    1638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-q6mbv" podUID="fba06093-89a5-4fd7-a387-ba646a61b908"
	Mar 17 11:29:21 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:21.834271    1638 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0aea2244f8547d36097058adcb56dc7276125bc00011321ba4b28766456e3f1\": failed to find network info for sandbox \"a0aea2244f8547d36097058adcb56dc7276125bc00011321ba4b28766456e3f1\""
	Mar 17 11:29:21 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:21.834351    1638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0aea2244f8547d36097058adcb56dc7276125bc00011321ba4b28766456e3f1\": failed to find network info for sandbox \"a0aea2244f8547d36097058adcb56dc7276125bc00011321ba4b28766456e3f1\"" pod="kube-system/coredns-668d6bf9bc-tm7kk"
	Mar 17 11:29:21 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:21.834389    1638 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0aea2244f8547d36097058adcb56dc7276125bc00011321ba4b28766456e3f1\": failed to find network info for sandbox \"a0aea2244f8547d36097058adcb56dc7276125bc00011321ba4b28766456e3f1\"" pod="kube-system/coredns-668d6bf9bc-tm7kk"
	Mar 17 11:29:21 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:21.834457    1638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tm7kk_kube-system(b1b35fe8-579c-409e-90f6-8b7b10d4f29a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tm7kk_kube-system(b1b35fe8-579c-409e-90f6-8b7b10d4f29a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0aea2244f8547d36097058adcb56dc7276125bc00011321ba4b28766456e3f1\\\": failed to find network info for sandbox \\\"a0aea2244f8547d36097058adcb56dc7276125bc00011321ba4b28766456e3f1\\\"\"" pod="kube-system/coredns-668d6bf9bc-tm7kk" podUID="b1b35fe8-579c-409e-90f6-8b7b10d4f29a"
	Mar 17 11:29:25 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:25.818334    1638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-q6mbv" podUID="fba06093-89a5-4fd7-a387-ba646a61b908"
	Mar 17 11:29:32 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:32.839228    1638 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f338a297f130859f273a7fb36df56a4e222e95470c23d3fa7742f0262a74b719\": failed to find network info for sandbox \"f338a297f130859f273a7fb36df56a4e222e95470c23d3fa7742f0262a74b719\""
	Mar 17 11:29:32 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:32.839326    1638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f338a297f130859f273a7fb36df56a4e222e95470c23d3fa7742f0262a74b719\": failed to find network info for sandbox \"f338a297f130859f273a7fb36df56a4e222e95470c23d3fa7742f0262a74b719\"" pod="kube-system/coredns-668d6bf9bc-tm7kk"
	Mar 17 11:29:32 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:32.839359    1638 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f338a297f130859f273a7fb36df56a4e222e95470c23d3fa7742f0262a74b719\": failed to find network info for sandbox \"f338a297f130859f273a7fb36df56a4e222e95470c23d3fa7742f0262a74b719\"" pod="kube-system/coredns-668d6bf9bc-tm7kk"
	Mar 17 11:29:32 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:32.839416    1638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tm7kk_kube-system(b1b35fe8-579c-409e-90f6-8b7b10d4f29a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tm7kk_kube-system(b1b35fe8-579c-409e-90f6-8b7b10d4f29a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f338a297f130859f273a7fb36df56a4e222e95470c23d3fa7742f0262a74b719\\\": failed to find network info for sandbox \\\"f338a297f130859f273a7fb36df56a4e222e95470c23d3fa7742f0262a74b719\\\"\"" pod="kube-system/coredns-668d6bf9bc-tm7kk" podUID="b1b35fe8-579c-409e-90f6-8b7b10d4f29a"
	Mar 17 11:29:39 default-k8s-diff-port-627203 kubelet[1638]: E0317 11:29:39.816289    1638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-q6mbv" podUID="fba06093-89a5-4fd7-a387-ba646a61b908"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-627203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-tm7kk kindnet-q6mbv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-627203 describe pod coredns-668d6bf9bc-tm7kk kindnet-q6mbv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-627203 describe pod coredns-668d6bf9bc-tm7kk kindnet-q6mbv: exit status 1 (70.884427ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-tm7kk" not found
	Error from server (NotFound): pods "kindnet-q6mbv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-627203 describe pod coredns-668d6bf9bc-tm7kk kindnet-q6mbv: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (571.67s)
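The kubelet entries above give the failure chain for this test: the kindnet CNI image pull is rejected by Docker Hub's unauthenticated rate limit (429 Too Many Requests), so no CNI plugin is ever installed, sandbox network setup then fails for coredns, and minikube times out waiting for the kube-dns component. The NotFound errors from the post-mortem describe most likely mean the two pods were deleted or replaced in the short window between the list and the describe calls. A minimal mitigation sketch, assuming the host's Docker daemon can still pull the image (for example after docker login, which raises the rate limit) and reusing the profile name from this run:

	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	out/minikube-linux-amd64 -p default-k8s-diff-port-627203 image load docker.io/kindest/kindnetd:v20250214-acbabc1a

With the image preloaded into the node's containerd store, the kindnet DaemonSet can start without touching the registry; in CI the same effect is usually achieved by authenticating to Docker Hub or mirroring the image.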

                                                
                                    

Test pass (281/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 19.23
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.32.2/json-events 11.51
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.19
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.07
21 TestBinaryMirror 0.73
22 TestOffline 49.09
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 203.35
29 TestAddons/serial/Volcano 39.54
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.44
35 TestAddons/parallel/Registry 16.09
36 TestAddons/parallel/Ingress 18.39
37 TestAddons/parallel/InspektorGadget 10.79
38 TestAddons/parallel/MetricsServer 6.73
40 TestAddons/parallel/CSI 57.97
41 TestAddons/parallel/Headlamp 16.58
42 TestAddons/parallel/CloudSpanner 5.51
43 TestAddons/parallel/LocalPath 54.92
44 TestAddons/parallel/NvidiaDevicePlugin 6.48
45 TestAddons/parallel/Yakd 10.61
46 TestAddons/parallel/AmdGpuDevicePlugin 6.7
47 TestAddons/StoppedEnableDisable 12.1
48 TestCertOptions 24.61
49 TestCertExpiration 217.96
51 TestForceSystemdFlag 26.36
52 TestForceSystemdEnv 28.22
53 TestDockerEnvContainerd 34.83
54 TestKVMDriverInstallOrUpdate 4.75
58 TestErrorSpam/setup 18.27
59 TestErrorSpam/start 0.54
60 TestErrorSpam/status 0.84
61 TestErrorSpam/pause 1.45
62 TestErrorSpam/unpause 1.66
63 TestErrorSpam/stop 1.33
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 50.48
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.04
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.97
75 TestFunctional/serial/CacheCmd/cache/add_local 1.88
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 42.42
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.29
86 TestFunctional/serial/LogsFileCmd 1.3
87 TestFunctional/serial/InvalidService 3.98
89 TestFunctional/parallel/ConfigCmd 0.34
90 TestFunctional/parallel/DashboardCmd 10.07
91 TestFunctional/parallel/DryRun 0.37
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.94
97 TestFunctional/parallel/ServiceCmdConnect 7.83
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 37.24
101 TestFunctional/parallel/SSHCmd 0.55
102 TestFunctional/parallel/CpCmd 1.83
103 TestFunctional/parallel/MySQL 19.84
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 1.7
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
113 TestFunctional/parallel/License 0.61
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.18
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
116 TestFunctional/parallel/MountCmd/any-port 18.22
117 TestFunctional/parallel/ProfileCmd/profile_list 0.4
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
122 TestFunctional/parallel/ServiceCmd/List 0.34
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.4
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
125 TestFunctional/parallel/ServiceCmd/Format 0.36
126 TestFunctional/parallel/ServiceCmd/URL 0.35
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.21
132 TestFunctional/parallel/MountCmd/specific-port 1.81
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
134 TestFunctional/parallel/Version/short 0.05
135 TestFunctional/parallel/Version/components 0.59
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.79
141 TestFunctional/parallel/ImageCommands/Setup 1.78
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.06
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.16
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.38
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 97.81
163 TestMultiControlPlane/serial/DeployApp 5.94
164 TestMultiControlPlane/serial/PingHostFromPods 0.98
165 TestMultiControlPlane/serial/AddWorkerNode 20.89
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
168 TestMultiControlPlane/serial/CopyFile 15.53
169 TestMultiControlPlane/serial/StopSecondaryNode 12.46
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
171 TestMultiControlPlane/serial/RestartSecondaryNode 15.68
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 131.98
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.06
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
176 TestMultiControlPlane/serial/StopCluster 35.56
177 TestMultiControlPlane/serial/RestartCluster 67.91
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
179 TestMultiControlPlane/serial/AddSecondaryNode 35.91
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
184 TestJSONOutput/start/Command 43.2
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.63
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.56
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.63
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.19
209 TestKicCustomNetwork/create_custom_network 33.42
210 TestKicCustomNetwork/use_default_bridge_network 26.1
211 TestKicExistingNetwork 24.46
212 TestKicCustomSubnet 23.32
213 TestKicStaticIP 22.61
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 45.2
218 TestMountStart/serial/StartWithMountFirst 7.97
219 TestMountStart/serial/VerifyMountFirst 0.24
220 TestMountStart/serial/StartWithMountSecond 5.43
221 TestMountStart/serial/VerifyMountSecond 0.24
222 TestMountStart/serial/DeleteFirst 1.58
223 TestMountStart/serial/VerifyMountPostDelete 0.23
224 TestMountStart/serial/Stop 1.17
225 TestMountStart/serial/RestartStopped 7.63
226 TestMountStart/serial/VerifyMountPostStop 0.24
229 TestMultiNode/serial/FreshStart2Nodes 62.7
230 TestMultiNode/serial/DeployApp2Nodes 15.59
231 TestMultiNode/serial/PingHostFrom2Pods 0.67
232 TestMultiNode/serial/AddNode 15.08
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.61
235 TestMultiNode/serial/CopyFile 8.91
236 TestMultiNode/serial/StopNode 2.07
237 TestMultiNode/serial/StartAfterStop 8.33
238 TestMultiNode/serial/RestartKeepsNodes 77.86
239 TestMultiNode/serial/DeleteNode 4.92
240 TestMultiNode/serial/StopMultiNode 23.73
241 TestMultiNode/serial/RestartMultiNode 44.84
242 TestMultiNode/serial/ValidateNameConflict 21.7
247 TestPreload 111.61
249 TestScheduledStopUnix 94.94
252 TestInsufficientStorage 9.16
253 TestRunningBinaryUpgrade 150.11
255 TestKubernetesUpgrade 318.16
256 TestMissingContainerUpgrade 175.24
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
259 TestNoKubernetes/serial/StartWithK8s 24.92
260 TestNoKubernetes/serial/StartWithStopK8s 17.87
261 TestNoKubernetes/serial/Start 5.65
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
263 TestNoKubernetes/serial/ProfileList 0.74
264 TestNoKubernetes/serial/Stop 1.22
265 TestNoKubernetes/serial/StartNoArgs 7.48
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
274 TestNetworkPlugins/group/false 2.97
285 TestStoppedBinaryUpgrade/Setup 2.27
286 TestStoppedBinaryUpgrade/Upgrade 85.64
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
290 TestNetworkPlugins/group/auto/Start 779.63
293 TestNetworkPlugins/group/custom-flannel/Start 36.41
294 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
295 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.18
296 TestNetworkPlugins/group/custom-flannel/DNS 0.13
297 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
298 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
299 TestNetworkPlugins/group/flannel/Start 38.8
300 TestNetworkPlugins/group/flannel/ControllerPod 6.01
301 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
302 TestNetworkPlugins/group/flannel/NetCatPod 9.17
303 TestNetworkPlugins/group/flannel/DNS 0.12
304 TestNetworkPlugins/group/flannel/Localhost 0.1
305 TestNetworkPlugins/group/flannel/HairPin 0.1
306 TestNetworkPlugins/group/bridge/Start 65.31
307 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
308 TestNetworkPlugins/group/bridge/NetCatPod 9.19
309 TestNetworkPlugins/group/bridge/DNS 0.12
310 TestNetworkPlugins/group/bridge/Localhost 0.1
311 TestNetworkPlugins/group/bridge/HairPin 0.1
312 TestNetworkPlugins/group/enable-default-cni/Start 71.09
313 TestNetworkPlugins/group/auto/KubeletFlags 0.26
314 TestNetworkPlugins/group/auto/NetCatPod 8.21
315 TestNetworkPlugins/group/auto/DNS 0.11
316 TestNetworkPlugins/group/auto/Localhost 0.09
317 TestNetworkPlugins/group/auto/HairPin 0.09
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
322 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
323 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
324 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
329 TestStartStop/group/old-k8s-version/serial/DeployApp 408.36
330 TestStartStop/group/no-preload/serial/DeployApp 421.25
332 TestStartStop/group/newest-cni/serial/FirstStart 25.06
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 162.28
334 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
336 TestStartStop/group/newest-cni/serial/Stop 1.25
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
338 TestStartStop/group/newest-cni/serial/SecondStart 12.64
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
342 TestStartStop/group/newest-cni/serial/Pause 2.65
344 TestStartStop/group/embed-certs/serial/FirstStart 40.92
345 TestStartStop/group/embed-certs/serial/DeployApp 10.27
346 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.85
347 TestStartStop/group/embed-certs/serial/Stop 12.07
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
349 TestStartStop/group/old-k8s-version/serial/Stop 11.92
350 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
351 TestStartStop/group/embed-certs/serial/SecondStart 263.02
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
353 TestStartStop/group/old-k8s-version/serial/SecondStart 131.24
354 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
355 TestStartStop/group/no-preload/serial/Stop 12.26
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
357 TestStartStop/group/no-preload/serial/SecondStart 262.02
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.02
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
364 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
365 TestStartStop/group/old-k8s-version/serial/Pause 2.54
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
367 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
368 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
369 TestStartStop/group/embed-certs/serial/Pause 2.57
370 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
371 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
372 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
373 TestStartStop/group/no-preload/serial/Pause 2.53
374 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
375 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
376 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
377 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.52
x
+
TestDownloadOnly/v1.20.0/json-events (19.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-058193 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-058193 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (19.230759853s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.23s)
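For reference, the json-events exercised by this test are CloudEvents printed one JSON object per line on stdout. A quick way to inspect the step sequence, as a sketch assuming jq is installed and the io.k8s.sigs.minikube.step event type emitted by minikube v1.35:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-058193 --force \
	  --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep): \(.data.name)"'

The TestJSONOutput DistinctCurrentSteps and IncreasingCurrentSteps entries in the pass table validate properties of these currentstep values.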

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0317 10:25:37.097281   11690 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0317 10:25:37.097365   11690 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-058193
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-058193: exit status 85 (54.938116ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-058193 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC |          |
	|         | -p download-only-058193        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 10:25:17
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 10:25:17.903940   11702 out.go:345] Setting OutFile to fd 1 ...
	I0317 10:25:17.904188   11702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:25:17.904198   11702 out.go:358] Setting ErrFile to fd 2...
	I0317 10:25:17.904202   11702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:25:17.904395   11702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	W0317 10:25:17.904502   11702 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20535-4918/.minikube/config/config.json: open /home/jenkins/minikube-integration/20535-4918/.minikube/config/config.json: no such file or directory
	I0317 10:25:17.905064   11702 out.go:352] Setting JSON to true
	I0317 10:25:17.905918   11702 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":411,"bootTime":1742206707,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 10:25:17.905974   11702 start.go:139] virtualization: kvm guest
	I0317 10:25:17.908360   11702 out.go:97] [download-only-058193] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0317 10:25:17.908458   11702 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball: no such file or directory
	I0317 10:25:17.908489   11702 notify.go:220] Checking for updates...
	I0317 10:25:17.909745   11702 out.go:169] MINIKUBE_LOCATION=20535
	I0317 10:25:17.910986   11702 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 10:25:17.912206   11702 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 10:25:17.913340   11702 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 10:25:17.914501   11702 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0317 10:25:17.916574   11702 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0317 10:25:17.916739   11702 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 10:25:17.938115   11702 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 10:25:17.938175   11702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 10:25:18.257830   11702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-17 10:25:18.249837922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 10:25:18.257937   11702 docker.go:318] overlay module found
	I0317 10:25:18.259489   11702 out.go:97] Using the docker driver based on user configuration
	I0317 10:25:18.259518   11702 start.go:297] selected driver: docker
	I0317 10:25:18.259532   11702 start.go:901] validating driver "docker" against <nil>
	I0317 10:25:18.259628   11702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 10:25:18.307023   11702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-17 10:25:18.297321098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 10:25:18.307195   11702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 10:25:18.307695   11702 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0317 10:25:18.307868   11702 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 10:25:18.309486   11702 out.go:169] Using Docker driver with root privileges
	I0317 10:25:18.310642   11702 cni.go:84] Creating CNI manager for ""
	I0317 10:25:18.310713   11702 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 10:25:18.310729   11702 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 10:25:18.310792   11702 start.go:340] cluster config:
	{Name:download-only-058193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-058193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 10:25:18.311954   11702 out.go:97] Starting "download-only-058193" primary control-plane node in "download-only-058193" cluster
	I0317 10:25:18.311970   11702 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 10:25:18.312969   11702 out.go:97] Pulling base image v0.0.46-1741860993-20523 ...
	I0317 10:25:18.312990   11702 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0317 10:25:18.313129   11702 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 10:25:18.328375   11702 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 10:25:18.328522   11702 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0317 10:25:18.328600   11702 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 10:25:18.470876   11702 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0317 10:25:18.470911   11702 cache.go:56] Caching tarball of preloaded images
	I0317 10:25:18.471057   11702 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0317 10:25:18.472920   11702 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0317 10:25:18.472935   11702 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0317 10:25:18.572402   11702 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0317 10:25:29.688511   11702 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0317 10:25:29.688607   11702 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0317 10:25:30.603638   11702 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0317 10:25:30.603962   11702 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/download-only-058193/config.json ...
	I0317 10:25:30.603989   11702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/download-only-058193/config.json: {Name:mkb7ce4f6c90e030ff4ac7a90ce1233b75bd001a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 10:25:30.604136   11702 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0317 10:25:30.604285   11702 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-058193 host does not exist
	  To start a cluster, run: "minikube start -p download-only-058193"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-058193
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.32.2/json-events (11.51s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-487382 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-487382 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.505761693s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (11.51s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0317 10:25:48.972630   11690 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0317 10:25:48.972675   11690 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-487382
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-487382: exit status 85 (56.115989ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-058193 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC |                     |
	|         | -p download-only-058193        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | 17 Mar 25 10:25 UTC |
	| delete  | -p download-only-058193        | download-only-058193 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | 17 Mar 25 10:25 UTC |
	| start   | -o=json --download-only        | download-only-487382 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC |                     |
	|         | -p download-only-487382        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 10:25:37
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 10:25:37.503009   12094 out.go:345] Setting OutFile to fd 1 ...
	I0317 10:25:37.503299   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:25:37.503310   12094 out.go:358] Setting ErrFile to fd 2...
	I0317 10:25:37.503316   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:25:37.503528   12094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 10:25:37.504069   12094 out.go:352] Setting JSON to true
	I0317 10:25:37.504884   12094 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":431,"bootTime":1742206707,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 10:25:37.504980   12094 start.go:139] virtualization: kvm guest
	I0317 10:25:37.506731   12094 out.go:97] [download-only-487382] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 10:25:37.506848   12094 notify.go:220] Checking for updates...
	I0317 10:25:37.508217   12094 out.go:169] MINIKUBE_LOCATION=20535
	I0317 10:25:37.509353   12094 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 10:25:37.510460   12094 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 10:25:37.511489   12094 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 10:25:37.512883   12094 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0317 10:25:37.514812   12094 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0317 10:25:37.515083   12094 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 10:25:37.536151   12094 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 10:25:37.536234   12094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 10:25:37.579736   12094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-03-17 10:25:37.571509009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 10:25:37.579894   12094 docker.go:318] overlay module found
	I0317 10:25:37.581447   12094 out.go:97] Using the docker driver based on user configuration
	I0317 10:25:37.581476   12094 start.go:297] selected driver: docker
	I0317 10:25:37.581487   12094 start.go:901] validating driver "docker" against <nil>
	I0317 10:25:37.581575   12094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 10:25:37.625999   12094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-03-17 10:25:37.61845551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 10:25:37.626154   12094 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 10:25:37.626584   12094 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0317 10:25:37.626707   12094 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 10:25:37.628495   12094 out.go:169] Using Docker driver with root privileges
	I0317 10:25:37.629784   12094 cni.go:84] Creating CNI manager for ""
	I0317 10:25:37.629851   12094 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 10:25:37.629861   12094 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 10:25:37.629925   12094 start.go:340] cluster config:
	{Name:download-only-487382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-487382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 10:25:37.631166   12094 out.go:97] Starting "download-only-487382" primary control-plane node in "download-only-487382" cluster
	I0317 10:25:37.631185   12094 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 10:25:37.632249   12094 out.go:97] Pulling base image v0.0.46-1741860993-20523 ...
	I0317 10:25:37.632274   12094 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 10:25:37.632360   12094 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 10:25:37.648729   12094 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 10:25:37.648828   12094 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0317 10:25:37.648843   12094 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory, skipping pull
	I0317 10:25:37.648847   12094 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in cache, skipping pull
	I0317 10:25:37.648856   12094 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	I0317 10:25:38.077901   12094 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 10:25:38.077931   12094 cache.go:56] Caching tarball of preloaded images
	I0317 10:25:38.078108   12094 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 10:25:38.079889   12094 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0317 10:25:38.079908   12094 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0317 10:25:38.178518   12094 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:17ec4d97c92604221650726c3857ee2a -> /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 10:25:47.391726   12094 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0317 10:25:47.391833   12094 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-487382 host does not exist
	  To start a cluster, run: "minikube start -p download-only-487382"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

TestDownloadOnly/v1.32.2/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.19s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-487382
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.07s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-377524 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-377524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-377524
--- PASS: TestDownloadOnlyKic (1.07s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
I0317 10:25:50.657485   11690 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-003319 --alsologtostderr --binary-mirror http://127.0.0.1:45899 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-003319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-003319
--- PASS: TestBinaryMirror (0.73s)

TestOffline (49.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-382101 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-382101 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (44.173812597s)
helpers_test.go:175: Cleaning up "offline-containerd-382101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-382101
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-382101: (4.911143455s)
--- PASS: TestOffline (49.09s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-712202
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-712202: exit status 85 (49.465856ms)

-- stdout --
	* Profile "addons-712202" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-712202"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-712202
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-712202: exit status 85 (50.193117ms)

-- stdout --
	* Profile "addons-712202" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-712202"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (203.35s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-712202 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-712202 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m23.350063572s)
--- PASS: TestAddons/Setup (203.35s)

TestAddons/serial/Volcano (39.54s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 10.206849ms
addons_test.go:815: volcano-admission stabilized in 10.260492ms
addons_test.go:823: volcano-controller stabilized in 10.318203ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-fghgk" [18a36011-300b-4ae7-a01c-b90e23b34ad5] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003019713s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-h4t7b" [ffc1c7e6-e859-4b7b-85b3-bd81ad275054] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00355672s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-g6qbx" [38d0a463-9438-453a-b997-2ff8abe8f9e1] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003781782s
addons_test.go:842: (dbg) Run:  kubectl --context addons-712202 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-712202 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-712202 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5101cb00-8d75-43b5-88fa-95612cd11e55] Pending
helpers_test.go:344: "test-job-nginx-0" [5101cb00-8d75-43b5-88fa-95612cd11e55] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5101cb00-8d75-43b5-88fa-95612cd11e55] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003368133s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-712202 addons disable volcano --alsologtostderr -v=1: (11.21279395s)
--- PASS: TestAddons/serial/Volcano (39.54s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-712202 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-712202 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (9.44s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-712202 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-712202 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e6af59f9-8bf0-4fe9-9b92-b3b3e8728139] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e6af59f9-8bf0-4fe9-9b92-b3b3e8728139] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003491811s
addons_test.go:633: (dbg) Run:  kubectl --context addons-712202 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-712202 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-712202 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.44s)

TestAddons/parallel/Registry (16.09s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.472579ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-9qj2x" [22ccbd47-dc6a-4425-8a21-c96932e8a0f5] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002428747s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kj4dz" [ee0b9696-1eca-4dbe-ad56-7aeb09dcd03b] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003203746s
addons_test.go:331: (dbg) Run:  kubectl --context addons-712202 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-712202 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-712202 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.177130362s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 ip
2025/03/17 10:30:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.09s)

TestAddons/parallel/Ingress (18.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-712202 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-712202 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-712202 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cea0adff-afa2-4bdb-ae4c-a1b688dbb31c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cea0adff-afa2-4bdb-ae4c-a1b688dbb31c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004200254s
I0317 10:30:27.911076   11690 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-712202 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-712202 addons disable ingress-dns --alsologtostderr -v=1: (1.110613826s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-712202 addons disable ingress --alsologtostderr -v=1: (7.832121098s)
--- PASS: TestAddons/parallel/Ingress (18.39s)

TestAddons/parallel/InspektorGadget (10.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b8fvs" [4ce0bc2b-93bc-472b-a24f-a37b300391a5] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003952857s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-712202 addons disable inspektor-gadget --alsologtostderr -v=1: (5.782730499s)
--- PASS: TestAddons/parallel/InspektorGadget (10.79s)

TestAddons/parallel/MetricsServer (6.73s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.068794ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-gwdk7" [9d15a295-5bcf-42b1-96c3-b92f292d33f6] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002519993s
addons_test.go:402: (dbg) Run:  kubectl --context addons-712202 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.73s)

TestAddons/parallel/CSI (57.97s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0317 10:30:28.836245   11690 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0317 10:30:28.839699   11690 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0317 10:30:28.839726   11690 kapi.go:107] duration metric: took 3.485561ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.498597ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-712202 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-712202 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cddeb845-ac61-4ddb-9ba9-71672dc01127] Pending
helpers_test.go:344: "task-pv-pod" [cddeb845-ac61-4ddb-9ba9-71672dc01127] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cddeb845-ac61-4ddb-9ba9-71672dc01127] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003048526s
addons_test.go:511: (dbg) Run:  kubectl --context addons-712202 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-712202 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-712202 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-712202 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-712202 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-712202 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-712202 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b9486710-4793-4396-abf8-e7b79072f523] Pending
helpers_test.go:344: "task-pv-pod-restore" [b9486710-4793-4396-abf8-e7b79072f523] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b9486710-4793-4396-abf8-e7b79072f523] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002956702s
addons_test.go:553: (dbg) Run:  kubectl --context addons-712202 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-712202 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-712202 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-712202 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.556705717s)
--- PASS: TestAddons/parallel/CSI (57.97s)
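
The repeated helpers_test.go:394 lines above are one polling helper re-invoked until kubectl reports the phase it wants. A minimal Go sketch of that loop, assuming kubectl is on PATH; the context and PVC names are taken from this run, and the 2-second retry interval is an illustrative assumption, not the helper's exact cadence:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // pollPVCPhase shells out to kubectl and retries until the PVC reports
    // the wanted phase or the timeout elapses.
    func pollPVCPhase(ctx, name, want string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
    			"-o", "jsonpath={.status.phase}", "-n", "default").Output()
    		if err == nil && strings.TrimSpace(string(out)) == want {
    			return nil
    		}
    		time.Sleep(2 * time.Second) // assumed retry interval
    	}
    	return fmt.Errorf("pvc %q never reached phase %q within %v", name, want, timeout)
    }

    func main() {
    	if err := pollPVCPhase("addons-712202", "hpvc-restore", "Bound", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }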

TestAddons/parallel/Headlamp (16.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-712202 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-hrdk4" [de17b848-c426-45da-9ee1-412baac9d1c7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-hrdk4" [de17b848-c426-45da-9ee1-412baac9d1c7] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003499841s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-712202 addons disable headlamp --alsologtostderr -v=1: (5.781278654s)
--- PASS: TestAddons/parallel/Headlamp (16.58s)

TestAddons/parallel/CloudSpanner (5.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-754dc876cd-n256p" [7d761ce7-4877-4cee-949c-8209b00e9593] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002870834s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

TestAddons/parallel/LocalPath (54.92s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-712202 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-712202 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-712202 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [eb18c410-8163-43df-aac8-04e8b48db6a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [eb18c410-8163-43df-aac8-04e8b48db6a7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [eb18c410-8163-43df-aac8-04e8b48db6a7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003630624s
addons_test.go:906: (dbg) Run:  kubectl --context addons-712202 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 ssh "cat /opt/local-path-provisioner/pvc-df23c300-f1b4-44c9-9ece-23b6f3d827c9_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-712202 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-712202 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-712202 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.980515337s)
--- PASS: TestAddons/parallel/LocalPath (54.92s)
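
The ssh "cat" step above works because the local-path provisioner materializes each volume on the node under /opt/local-path-provisioner, in a directory named after the bound PV, the namespace, and the claim; the layout here is inferred from the path this run actually read. A small sketch that rebuilds that path from the values visible in the log:

    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    // hostPathFor rebuilds the on-node directory the provisioner used,
    // following the <pv-name>_<namespace>_<pvc-name> layout seen above.
    func hostPathFor(pvName, namespace, pvcName string) string {
    	return filepath.Join("/opt/local-path-provisioner",
    		fmt.Sprintf("%s_%s_%s", pvName, namespace, pvcName))
    }

    func main() {
    	// PV name as reported by `kubectl get pvc test-pvc -o=json` in this run.
    	fmt.Println(hostPathFor("pvc-df23c300-f1b4-44c9-9ece-23b6f3d827c9", "default", "test-pvc"))
    }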

TestAddons/parallel/NvidiaDevicePlugin (6.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9bqgq" [ada139a9-5779-4621-bb84-4626d269af82] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002822768s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

TestAddons/parallel/Yakd (10.61s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-f4m2n" [8e756fc4-f449-4684-9197-2f2d38da6288] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003209399s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-712202 addons disable yakd --alsologtostderr -v=1: (5.605288274s)
--- PASS: TestAddons/parallel/Yakd (10.61s)

TestAddons/parallel/AmdGpuDevicePlugin (6.70s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-kfnbr" [579fbe2d-d81e-42b2-b5f1-ecce2f176754] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.002552577s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-712202 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.70s)

TestAddons/StoppedEnableDisable (12.10s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-712202
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-712202: (11.863566919s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-712202
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-712202
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-712202
--- PASS: TestAddons/StoppedEnableDisable (12.10s)

TestCertOptions (24.61s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-442523 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-442523 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (22.045496085s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-442523 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-442523 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-442523 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-442523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-442523
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-442523: (1.947719309s)
--- PASS: TestCertOptions (24.61s)
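
The openssl step above checks that the extra --apiserver-ips and --apiserver-names values made it into the apiserver certificate's subject alternative names. A rough Go equivalent of that check, assuming the certificate has already been copied off the node (for example via the `minikube ssh` command shown):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// apiserver.crt fetched beforehand with the ssh "cat" step above.
    	data, err := os.ReadFile("apiserver.crt")
    	if err != nil {
    		fmt.Println("read cert:", err)
    		return
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Println("no PEM block in apiserver.crt")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println("parse cert:", err)
    		return
    	}
    	// VerifyHostname checks DNS names and, for IP literals, the IP SANs.
    	for _, san := range []string{"192.168.15.15", "www.google.com"} {
    		if err := cert.VerifyHostname(san); err != nil {
    			fmt.Printf("SAN %s missing: %v\n", san, err)
    		} else {
    			fmt.Printf("SAN %s present\n", san)
    		}
    	}
    }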

TestCertExpiration (217.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-196744 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-196744 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (30.908004864s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-196744 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-196744 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (4.820252272s)
helpers_test.go:175: Cleaning up "cert-expiration-196744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-196744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-196744: (2.229372488s)
--- PASS: TestCertExpiration (217.96s)

TestForceSystemdFlag (26.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-408852 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-408852 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.115538279s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-408852 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-408852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-408852
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-408852: (1.993012393s)
--- PASS: TestForceSystemdFlag (26.36s)
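
The "cat /etc/containerd/config.toml" step is how the test confirms --force-systemd took effect: with the flag set, containerd's runc options should carry SystemdCgroup = true. A bare-bones sketch of that check, assuming the file has been copied off the node first (the test reads it over minikube ssh):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// config.toml copied off the node beforehand.
    	data, err := os.ReadFile("config.toml")
    	if err != nil {
    		fmt.Println("read config:", err)
    		return
    	}
    	if strings.Contains(string(data), "SystemdCgroup = true") {
    		fmt.Println("containerd is using the systemd cgroup driver")
    	} else {
    		fmt.Println("SystemdCgroup is not enabled")
    	}
    }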

TestForceSystemdEnv (28.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-616333 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-616333 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.922331235s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-616333 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-616333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-616333
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-616333: (2.011611491s)
--- PASS: TestForceSystemdEnv (28.22s)

TestDockerEnvContainerd (34.83s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-038371 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-038371 --driver=docker  --container-runtime=containerd: (19.399509375s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-038371"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dYxyyTZjrk6S/agent.38763" SSH_AGENT_PID="38764" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dYxyyTZjrk6S/agent.38763" SSH_AGENT_PID="38764" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dYxyyTZjrk6S/agent.38763" SSH_AGENT_PID="38764" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.723400027s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dYxyyTZjrk6S/agent.38763" SSH_AGENT_PID="38764" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-038371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-038371
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-038371: (1.832250437s)
--- PASS: TestDockerEnvContainerd (34.83s)
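
What `docker-env --ssh-host --ssh-add` sets up is visible in the env assignments above: the local docker CLI is pointed at the daemon inside the node over SSH. A sketch of the same round trip in Go; the socket path, agent PID, and port are the ephemeral values from this run and will differ on every invocation:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("docker", "version")
    	cmd.Env = append(os.Environ(),
    		// Values printed by `minikube docker-env --ssh-host --ssh-add` for this run.
    		"DOCKER_HOST=ssh://docker@127.0.0.1:32773",
    		"SSH_AUTH_SOCK=/tmp/ssh-dYxyyTZjrk6S/agent.38763",
    		"SSH_AGENT_PID=38764",
    	)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("docker over ssh failed:", err)
    	}
    }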

TestKVMDriverInstallOrUpdate (4.75s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0317 10:57:01.924022   11690 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 10:57:01.924209   11690 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0317 10:57:01.960867   11690 install.go:62] docker-machine-driver-kvm2: exit status 1
W0317 10:57:01.961060   11690 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0317 10:57:01.961131   11690 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3522620898/001/docker-machine-driver-kvm2
I0317 10:57:02.196926   11690 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3522620898/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0007339a8 gz:0xc000733a30 tar:0xc0007339e0 tar.bz2:0xc0007339f0 tar.gz:0xc000733a00 tar.xz:0xc000733a10 tar.zst:0xc000733a20 tbz2:0xc0007339f0 tgz:0xc000733a00 txz:0xc000733a10 tzst:0xc000733a20 xz:0xc000733a38 zip:0xc000733a40 zst:0xc000733a50] Getters:map[file:0xc001b782e0 http:0xc000d8e320 https:0xc000d8e370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0317 10:57:02.196972   11690 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3522620898/001/docker-machine-driver-kvm2
I0317 10:57:04.805975   11690 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 10:57:04.806075   11690 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0317 10:57:04.840839   11690 install.go:137] /home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0317 10:57:04.840873   11690 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0317 10:57:04.840949   11690 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0317 10:57:04.840981   11690 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3522620898/002/docker-machine-driver-kvm2
I0317 10:57:04.867855   11690 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3522620898/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0007339a8 gz:0xc000733a30 tar:0xc0007339e0 tar.bz2:0xc0007339f0 tar.gz:0xc000733a00 tar.xz:0xc000733a10 tar.zst:0xc000733a20 tbz2:0xc0007339f0 tgz:0xc000733a00 txz:0xc000733a10 tzst:0xc000733a20 xz:0xc000733a38 zip:0xc000733a40 zst:0xc000733a50] Getters:map[file:0xc001b791f0 http:0xc000d8f4a0 https:0xc000d8f4f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0317 10:57:04.867908   11690 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3522620898/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.75s)
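
The download.go/driver.go lines above show the fallback order: the arch-qualified release asset is tried first, and when its .sha256 checksum file 404s the download retries the unqualified common asset. A compressed sketch of that ordering (the checksum verification the real getter performs is omitted here):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    // fetchDriver tries the arch-qualified asset first and falls back to the
    // unqualified name, mirroring the two download attempts logged above.
    func fetchDriver(version, arch string) (*http.Response, error) {
    	base := "https://github.com/kubernetes/minikube/releases/download/" + version
    	resp, err := http.Get(base + "/docker-machine-driver-kvm2-" + arch)
    	if err == nil && resp.StatusCode == http.StatusOK {
    		return resp, nil
    	}
    	if resp != nil {
    		resp.Body.Close()
    	}
    	return http.Get(base + "/docker-machine-driver-kvm2")
    }

    func main() {
    	resp, err := fetchDriver("v1.3.0", "amd64")
    	if err != nil {
    		fmt.Println("download failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }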

TestErrorSpam/setup (18.27s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-326113 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-326113 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-326113 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-326113 --driver=docker  --container-runtime=containerd: (18.268841999s)
--- PASS: TestErrorSpam/setup (18.27s)

TestErrorSpam/start (0.54s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 start --dry-run
--- PASS: TestErrorSpam/start (0.54s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.45s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.66s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

TestErrorSpam/stop (1.33s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 stop: (1.163968376s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-326113 --log_dir /tmp/nospam-326113 stop
--- PASS: TestErrorSpam/stop (1.33s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/test/nested/copy/11690/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.48s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-793863 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-793863 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (50.475432991s)
--- PASS: TestFunctional/serial/StartWithProxy (50.48s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.04s)

=== RUN   TestFunctional/serial/SoftStart
I0317 10:33:42.768268   11690 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-793863 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-793863 --alsologtostderr -v=8: (5.038944969s)
functional_test.go:680: soft start took 5.03962656s for "functional-793863" cluster.
I0317 10:33:47.807539   11690 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (5.04s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-793863 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-793863 cache add registry.k8s.io/pause:3.3: (1.078561211s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.97s)

TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-793863 /tmp/TestFunctionalserialCacheCmdcacheadd_local745522389/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 cache add minikube-local-cache-test:functional-793863
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-793863 cache add minikube-local-cache-test:functional-793863: (1.566037266s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 cache delete minikube-local-cache-test:functional-793863
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-793863
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-793863 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (261.715023ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
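
The cache_reload sequence above is: remove the image from the node's containerd store, confirm `crictl inspecti` now fails, run `cache reload` to re-push the image from minikube's on-disk cache, and confirm it is back. The same cycle scripted in Go against the binary and profile used in this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run invokes the minikube binary used by this job and echoes its output.
    func run(args ...string) error {
    	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	p := "functional-793863"
    	img := "registry.k8s.io/pause:latest"
    	_ = run("-p", p, "ssh", "sudo", "crictl", "rmi", img)
    	if run("-p", p, "ssh", "sudo", "crictl", "inspecti", img) == nil {
    		fmt.Println("expected the image to be gone before the reload")
    	}
    	_ = run("-p", p, "cache", "reload")
    	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
    		fmt.Println("reload did not restore the image:", err)
    	}
    }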

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 kubectl -- --context functional-793863 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-793863 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (42.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-793863 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0317 10:34:14.795453   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:14.801846   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:14.813194   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:14.834575   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:14.875969   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:14.957458   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:15.118999   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:15.440421   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:16.082428   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:17.363977   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:19.925561   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:25.047017   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:34:35.288503   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-793863 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.42020448s)
functional_test.go:778: restart took 42.420318739s for "functional-793863" cluster.
I0317 10:34:37.377867   11690 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (42.42s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-793863 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
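
ComponentHealth boils down to: every control-plane pod is in phase Running with a Ready condition of True, which is what the phase/status pairs above report. A sketch of that check over the same `kubectl ... -o=json` output; the podList type models just the fields the check needs:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type podList struct {
    	Items []struct {
    		Metadata struct {
    			Labels map[string]string `json:"labels"`
    		} `json:"metadata"`
    		Status struct {
    			Phase      string `json:"phase"`
    			Conditions []struct {
    				Type   string `json:"type"`
    				Status string `json:"status"`
    			} `json:"conditions"`
    		} `json:"status"`
    	} `json:"items"`
    }

    func main() {
    	out, err := exec.Command("kubectl", "--context", "functional-793863",
    		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
    	if err != nil {
    		fmt.Println("kubectl:", err)
    		return
    	}
    	var pods podList
    	if err := json.Unmarshal(out, &pods); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	for _, p := range pods.Items {
    		ready := "False"
    		for _, c := range p.Status.Conditions {
    			if c.Type == "Ready" {
    				ready = c.Status
    			}
    		}
    		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
    	}
    }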

TestFunctional/serial/LogsCmd (1.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-793863 logs: (1.291515286s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.3s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 logs --file /tmp/TestFunctionalserialLogsFileCmd4062296935/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-793863 logs --file /tmp/TestFunctionalserialLogsFileCmd4062296935/001/logs.txt: (1.294466482s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

TestFunctional/serial/InvalidService (3.98s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-793863 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-793863
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-793863: exit status 115 (307.24725ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30646 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-793863 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-793863 config get cpus: exit status 14 (80.907411ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-793863 config get cpus: exit status 14 (44.174823ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

TestFunctional/parallel/DashboardCmd (10.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-793863 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-793863 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 60461: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.07s)

TestFunctional/parallel/DryRun (0.37s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-793863 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-793863 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (157.111093ms)

-- stdout --
	* [functional-793863] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0317 10:35:06.058732   59367 out.go:345] Setting OutFile to fd 1 ...
	I0317 10:35:06.058828   59367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:35:06.058839   59367 out.go:358] Setting ErrFile to fd 2...
	I0317 10:35:06.058844   59367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:35:06.059004   59367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 10:35:06.059684   59367 out.go:352] Setting JSON to false
	I0317 10:35:06.060777   59367 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":999,"bootTime":1742206707,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 10:35:06.060842   59367 start.go:139] virtualization: kvm guest
	I0317 10:35:06.063040   59367 out.go:177] * [functional-793863] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 10:35:06.064948   59367 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 10:35:06.064979   59367 notify.go:220] Checking for updates...
	I0317 10:35:06.066337   59367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 10:35:06.068137   59367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 10:35:06.069496   59367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 10:35:06.071286   59367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 10:35:06.072542   59367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 10:35:06.074259   59367 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 10:35:06.074937   59367 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 10:35:06.101405   59367 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 10:35:06.101535   59367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 10:35:06.153453   59367 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-17 10:35:06.14372272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 10:35:06.153557   59367 docker.go:318] overlay module found
	I0317 10:35:06.156136   59367 out.go:177] * Using the docker driver based on existing profile
	I0317 10:35:06.157311   59367 start.go:297] selected driver: docker
	I0317 10:35:06.157328   59367 start.go:901] validating driver "docker" against &{Name:functional-793863 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-793863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 10:35:06.157403   59367 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 10:35:06.159316   59367 out.go:201] 
	W0317 10:35:06.160813   59367 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0317 10:35:06.162126   59367 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-793863 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.37s)
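
The dry-run exercise above also covers the failure path quoted in the stderr block: minikube validates the requested memory before creating anything. A minimal Go sketch of that floor check, assuming an illustrative constant and function name (not minikube's actual code):

package main

import (
	"fmt"
	"os"
)

// minUsableMemoryMB mirrors the 1800MB floor quoted by the
// RSRC_INSUFFICIENT_REQ_MEMORY message above; the value comes from the
// log, the constant and function themselves are illustrative.
const minUsableMemoryMB = 1800

func validateRequestedMemory(reqMB int) error {
	if reqMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMB is less than the usable minimum of %dMB", reqMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	// 250 matches the --memory 250MB flag the test passes.
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err.Error())
		os.Exit(23) // assumed to match the exit status seen in the log
	}
}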

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-793863 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-793863 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (150.865002ms)
-- stdout --
	* [functional-793863] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0317 10:35:06.423209   59684 out.go:345] Setting OutFile to fd 1 ...
	I0317 10:35:06.423371   59684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:35:06.423382   59684 out.go:358] Setting ErrFile to fd 2...
	I0317 10:35:06.423387   59684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:35:06.423635   59684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 10:35:06.424165   59684 out.go:352] Setting JSON to false
	I0317 10:35:06.425250   59684 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":999,"bootTime":1742206707,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 10:35:06.425358   59684 start.go:139] virtualization: kvm guest
	I0317 10:35:06.427474   59684 out.go:177] * [functional-793863] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0317 10:35:06.428873   59684 notify.go:220] Checking for updates...
	I0317 10:35:06.428919   59684 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 10:35:06.430289   59684 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 10:35:06.431679   59684 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 10:35:06.433139   59684 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 10:35:06.434486   59684 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 10:35:06.435730   59684 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 10:35:06.437310   59684 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 10:35:06.437745   59684 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 10:35:06.461286   59684 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 10:35:06.461383   59684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 10:35:06.514130   59684 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-17 10:35:06.504052425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 10:35:06.514234   59684 docker.go:318] overlay module found
	I0317 10:35:06.516254   59684 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0317 10:35:06.517660   59684 start.go:297] selected driver: docker
	I0317 10:35:06.517678   59684 start.go:901] validating driver "docker" against &{Name:functional-793863 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-793863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 10:35:06.517790   59684 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 10:35:06.520357   59684 out.go:201] 
	W0317 10:35:06.521584   59684 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0317 10:35:06.523007   59684 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
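
The test passes because minikube selects a translation for its user-facing strings from the caller's locale, which is what produced the French output above. A toy sketch of a locale-keyed lookup, assuming an illustrative message table and helper; minikube's real translations live in JSON files chosen via LANG/LC_ALL:

package main

import (
	"fmt"
	"os"
	"strings"
)

// A toy message table keyed by locale; illustrative only.
var messages = map[string]map[string]string{
	"fr": {
		"Using the docker driver based on existing profile": "Utilisation du pilote docker basé sur le profil existant",
	},
}

// translate returns the localized string when one exists, else the English key.
func translate(locale, key string) string {
	if m, ok := messages[locale]; ok {
		if s, ok := m[key]; ok {
			return s
		}
	}
	return key
}

func main() {
	// e.g. "fr_FR.UTF-8" -> "fr"; an empty LC_ALL falls back to English.
	locale := strings.Split(os.Getenv("LC_ALL"), "_")[0]
	fmt.Println("* " + translate(locale, "Using the docker driver based on existing profile"))
}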

TestFunctional/parallel/StatusCmd (0.94s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)

TestFunctional/parallel/ServiceCmdConnect (7.83s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-793863 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-793863 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-hhqd8" [7cca182f-72e7-4f6b-b223-e0ec40e00406] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-hhqd8" [7cca182f-72e7-4f6b-b223-e0ec40e00406] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003756814s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32600
functional_test.go:1692: http://192.168.49.2:32600: success! body:

Hostname: hello-node-connect-58f9cf68d8-hhqd8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32600
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.83s)
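
The assertion above reduces to an HTTP GET against the NodePort URL printed by `minikube service hello-node-connect --url`, checking that the echoserver reports its pod hostname. A standalone sketch; the URL is hardcoded from this run and would differ elsewhere:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// URL as reported by the service command in the log above.
	url := "http://192.168.49.2:32600"

	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// The echoserver response begins with "Hostname: <pod-name>",
	// which is what the test asserts on.
	if !strings.Contains(string(body), "Hostname:") {
		log.Fatalf("unexpected body: %s", body)
	}
	fmt.Printf("%s: success! body:\n%s", url, body)
}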

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (37.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [aca348e1-7df7-471f-9e54-8ae978f0a194] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003376439s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-793863 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-793863 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-793863 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-793863 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9af8999b-32ba-4ec9-b584-5a3a9b18c2e2] Pending
helpers_test.go:344: "sp-pod" [9af8999b-32ba-4ec9-b584-5a3a9b18c2e2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9af8999b-32ba-4ec9-b584-5a3a9b18c2e2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003970442s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-793863 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-793863 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-793863 delete -f testdata/storage-provisioner/pod.yaml: (1.411297275s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-793863 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d3ef8cba-4803-4d46-8382-b3879935e285] Pending
helpers_test.go:344: "sp-pod" [d3ef8cba-4803-4d46-8382-b3879935e285] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d3ef8cba-4803-4d46-8382-b3879935e285] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003110909s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-793863 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.24s)
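
The middle of this test is the part worth noting: a file written to the PVC-backed mount must survive deleting and recreating the pod. A compressed sketch of that sequence via kubectl; the real test additionally waits for the new pod to reach Running before the final exec:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to kubectl the way the harness does; the context name and
// manifest path match the log above but are otherwise illustrative.
func run(args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-793863"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Write a marker file onto the PVC-backed mount...
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// ...recreate the pod from the same manifest...
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...and confirm the file survived, proving the claim is durable.
	fmt.Println(run("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}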

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.83s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh -n functional-793863 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 cp functional-793863:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1476244646/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh -n functional-793863 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh -n functional-793863 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.83s)

TestFunctional/parallel/MySQL (19.84s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-793863 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-4wgx2" [ba2fdc5a-a405-4cc0-904d-9006dff7db4c] Pending
helpers_test.go:344: "mysql-58ccfd96bb-4wgx2" [ba2fdc5a-a405-4cc0-904d-9006dff7db4c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-4wgx2" [ba2fdc5a-a405-4cc0-904d-9006dff7db4c] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.04028772s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-793863 exec mysql-58ccfd96bb-4wgx2 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-793863 exec mysql-58ccfd96bb-4wgx2 -- mysql -ppassword -e "show databases;": exit status 1 (216.543806ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0317 10:35:01.622558   11690 retry.go:31] will retry after 1.350963991s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-793863 exec mysql-58ccfd96bb-4wgx2 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-793863 exec mysql-58ccfd96bb-4wgx2 -- mysql -ppassword -e "show databases;": exit status 1 (114.264596ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0317 10:35:03.088941   11690 retry.go:31] will retry after 815.585864ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-793863 exec mysql-58ccfd96bb-4wgx2 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-793863 exec mysql-58ccfd96bb-4wgx2 -- mysql -ppassword -e "show databases;": exit status 1 (118.126552ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0317 10:35:04.023233   11690 retry.go:31] will retry after 1.84216125s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-793863 exec mysql-58ccfd96bb-4wgx2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.84s)
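
The three failed attempts are expected: ERROR 1045 appears while the root password is still being applied and ERROR 2002 while mysqld restarts, so the harness retries with a growing, jittered delay (the retry.go lines above). A standalone sketch of such a loop; the backoff policy here is illustrative:

package main

import (
	"log"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	backoff := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		// Same command the test runs, copied from the log.
		cmd := exec.Command("kubectl", "--context", "functional-793863", "exec",
			"mysql-58ccfd96bb-4wgx2", "--", "mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			log.Printf("success on attempt %d:\n%s", attempt, out)
			return
		}
		log.Printf("attempt %d failed: %v", attempt, err)
		// Sleep a jittered multiple of the base delay, then double it.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		backoff *= 2
	}
	log.Fatal("mysql never became ready")
}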

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/11690/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo cat /etc/test/nested/copy/11690/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (1.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/11690.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo cat /etc/ssl/certs/11690.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/11690.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo cat /usr/share/ca-certificates/11690.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/116902.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo cat /etc/ssl/certs/116902.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/116902.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo cat /usr/share/ca-certificates/116902.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)
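
The /etc/ssl/certs/51391683.0 and 3ec20f2e.0 paths are not arbitrary: OpenSSL-style certificate directories name each CA by its subject hash plus a collision counter, which is how the test can predict where a synced PEM must also appear. A sketch that derives such a name, with the input path taken from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash -noout` prints the subject hash used for the
	// c_rehash-style filename <hash>.<n> in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/11690.pem").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}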

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-793863 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-793863 ssh "sudo systemctl is-active docker": exit status 1 (271.646651ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-793863 ssh "sudo systemctl is-active crio": exit status 1 (275.756618ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
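
The `exit status 3` in the ssh output is systemd's convention for a unit that is not active, so stdout reading `inactive` together with that code is exactly what the test expects from the two runtimes that should be off. A local sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		switch e := err.(type) {
		case nil:
			fmt.Printf("%s: active\n", unit)
		case *exec.ExitError:
			// systemctl exits 3 for an inactive unit, matching the log above.
			fmt.Printf("%s: %s (exit %d)\n", unit, strings.TrimSpace(string(out)), e.ExitCode())
		default:
			fmt.Printf("%s: could not run systemctl: %v\n", unit, err)
		}
	}
}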

TestFunctional/parallel/License (0.61s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-793863 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-793863 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-75q7t" [4274ae5a-28bd-49df-abfc-6f184d2c5dad] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-75q7t" [4274ae5a-28bd-49df-abfc-6f184d2c5dad] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004480794s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.18s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/MountCmd/any-port (18.22s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdany-port1792829223/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1742207684610887832" to /tmp/TestFunctionalparallelMountCmdany-port1792829223/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1742207684610887832" to /tmp/TestFunctionalparallelMountCmdany-port1792829223/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1742207684610887832" to /tmp/TestFunctionalparallelMountCmdany-port1792829223/001/test-1742207684610887832
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.207391ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0317 10:34:44.897430   11690 retry.go:31] will retry after 736.171832ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 17 10:34 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 17 10:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 17 10:34 test-1742207684610887832
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh cat /mount-9p/test-1742207684610887832
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-793863 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c8cf1a8a-8562-48f1-bad4-38f42e847d7b] Pending
helpers_test.go:344: "busybox-mount" [c8cf1a8a-8562-48f1-bad4-38f42e847d7b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c8cf1a8a-8562-48f1-bad4-38f42e847d7b] Running
helpers_test.go:344: "busybox-mount" [c8cf1a8a-8562-48f1-bad4-38f42e847d7b] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c8cf1a8a-8562-48f1-bad4-38f42e847d7b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.004171937s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-793863 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdany-port1792829223/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.22s)
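
The first findmnt probe fails because the backgrounded `minikube mount` process establishes the 9p mount asynchronously; the harness simply polls until it appears. A minimal polling sketch, with the binary path and profile name copied from the log and an illustrative timeout:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		// Same probe the test runs over ssh; exits non-zero until the
		// 9p filesystem is attached at /mount-9p.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-793863",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := cmd.CombinedOutput(); err == nil {
			log.Printf("mount is up:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("/mount-9p never appeared")
}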

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "350.435102ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "51.697922ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "441.495085ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "78.540244ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 service list -o json
functional_test.go:1511: Took "404.669667ms" to run "out/minikube-linux-amd64 -p functional-793863 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:30905
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:30905
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-793863 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-793863 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-793863 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-793863 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 57118: os: process already finished
helpers_test.go:508: unable to kill pid 56961: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-793863 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-793863 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [188a9449-98c2-46e9-8d19-fdb803db62da] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0317 10:34:55.770461   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-svc" [188a9449-98c2-46e9-8d19-fdb803db62da] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.00381915s
I0317 10:35:11.373064   11690 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.21s)

TestFunctional/parallel/MountCmd/specific-port (1.81s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdspecific-port4182948081/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.210523ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0317 10:35:03.130573   11690 retry.go:31] will retry after 540.758615ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdspecific-port4182948081/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-793863 ssh "sudo umount -f /mount-9p": exit status 1 (251.455228ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-793863 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdspecific-port4182948081/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946834211/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946834211/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946834211/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T" /mount1: exit status 1 (311.655373ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0317 10:35:04.956949   11690 retry.go:31] will retry after 397.410944ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-793863 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946834211/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946834211/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-793863 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3946834211/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)
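
Both version subtests reduce to single CLI calls; a minimal sketch (again with `minikube` standing in for the built binary):

    minikube -p functional-793863 version --short                # bare version string only
    minikube -p functional-793863 version -o=json --components   # per-component version details as JSON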

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-793863 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-793863
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kicbase/echo-server:functional-793863
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-793863 image ls --format short --alsologtostderr:
I0317 10:35:16.596626   62339 out.go:345] Setting OutFile to fd 1 ...
I0317 10:35:16.596720   62339 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:35:16.596727   62339 out.go:358] Setting ErrFile to fd 2...
I0317 10:35:16.596731   62339 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:35:16.596914   62339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
I0317 10:35:16.597435   62339 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 10:35:16.597526   62339 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 10:35:16.597848   62339 cli_runner.go:164] Run: docker container inspect functional-793863 --format={{.State.Status}}
I0317 10:35:16.616694   62339 ssh_runner.go:195] Run: systemctl --version
I0317 10:35:16.616744   62339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793863
I0317 10:35:16.637007   62339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/functional-793863/id_rsa Username:docker}
I0317 10:35:16.731828   62339 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
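
The four ImageList subtests below differ only in the output format flag; the same inventory can be pulled in every shape with a small loop (a sketch, not part of the suite):

    for fmt in short table json yaml; do
      minikube -p functional-793863 image ls --format "$fmt"
    done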

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-793863 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-controller-manager     | v1.32.2            | sha256:b6a454 | 26.3MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/nginx                     | alpine             | sha256:1ff4bb | 20.8MB |
| registry.k8s.io/kube-proxy                  | v1.32.2            | sha256:f13328 | 30.9MB |
| registry.k8s.io/kube-scheduler              | v1.32.2            | sha256:d8e673 | 20.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20250214-acbabc1a | sha256:df3849 | 39MB   |
| docker.io/library/nginx                     | latest             | sha256:b52e0b | 72.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/kicbase/echo-server               | functional-793863  | sha256:9056ab | 2.37MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-apiserver              | v1.32.2            | sha256:85b7a1 | 28.7MB |
| docker.io/kindest/kindnetd                  | v20241212-9f82dd49 | sha256:d30084 | 39MB   |
| docker.io/library/minikube-local-cache-test | functional-793863  | sha256:87ef80 | 991B   |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-793863 image ls --format table --alsologtostderr:
I0317 10:35:17.564484   62763 out.go:345] Setting OutFile to fd 1 ...
I0317 10:35:17.564737   62763 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:35:17.564748   62763 out.go:358] Setting ErrFile to fd 2...
I0317 10:35:17.564751   62763 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:35:17.564911   62763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
I0317 10:35:17.565429   62763 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 10:35:17.565516   62763 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 10:35:17.565858   62763 cli_runner.go:164] Run: docker container inspect functional-793863 --format={{.State.Status}}
I0317 10:35:17.582530   62763 ssh_runner.go:195] Run: systemctl --version
I0317 10:35:17.582572   62763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793863
I0317 10:35:17.598836   62763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/functional-793863/id_rsa Username:docker}
I0317 10:35:17.687661   62763 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-793863 image ls --format json --alsologtostderr:
[{"id":"sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"39008320"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:df3849d954c98a7162c7bee7313ece357606e313d9
8ebd68b7aac5e961b1156f","repoDigests":["docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"38996835"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-793863"],"size":"2372971"},{"id":"sha256:87ef80d645eceb4d5406b9c76926a359a2748adb688ad7034d916773eeaf6ff4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-793863"],"size":"991"},{"id":"sha256:b52e0b094bc0e26c9eddc9e4ab7a64ce0033c3360d8b7ad4ff4132c4e03e8f7b","repoDigests":["docker.io/library/nginx@sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496"],"repoTags":["docker.io/library/nginx:latest"],"size":"72195292"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a
8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"28670731"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc
6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"30907858"},{"id":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"20657902"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags
":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591"],"repoTags":["docker.io/library/nginx:alpine"],"size":"20834790"},{"id":"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc0
86621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"26259392"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-793863 image ls --format json --alsologtostderr:
I0317 10:35:17.336845   62594 out.go:345] Setting OutFile to fd 1 ...
I0317 10:35:17.336991   62594 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:35:17.337001   62594 out.go:358] Setting ErrFile to fd 2...
I0317 10:35:17.337007   62594 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:35:17.337553   62594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
I0317 10:35:17.338377   62594 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 10:35:17.338509   62594 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 10:35:17.339037   62594 cli_runner.go:164] Run: docker container inspect functional-793863 --format={{.State.Status}}
I0317 10:35:17.358623   62594 ssh_runner.go:195] Run: systemctl --version
I0317 10:35:17.358670   62594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793863
I0317 10:35:17.380572   62594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/functional-793863/id_rsa Username:docker}
I0317 10:35:17.475682   62594 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-793863 image ls --format yaml --alsologtostderr:
- id: sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "39008320"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "28670731"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-793863
size: "2372971"
- id: sha256:df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "38996835"
- id: sha256:87ef80d645eceb4d5406b9c76926a359a2748adb688ad7034d916773eeaf6ff4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-793863
size: "991"
- id: sha256:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
repoTags:
- docker.io/library/nginx:alpine
size: "20834790"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "30907858"
- id: sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "20657902"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:b52e0b094bc0e26c9eddc9e4ab7a64ce0033c3360d8b7ad4ff4132c4e03e8f7b
repoDigests:
- docker.io/library/nginx@sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496
repoTags:
- docker.io/library/nginx:latest
size: "72195292"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "26259392"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-793863 image ls --format yaml --alsologtostderr:
I0317 10:35:16.820806   62426 out.go:345] Setting OutFile to fd 1 ...
I0317 10:35:16.821126   62426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:35:16.821139   62426 out.go:358] Setting ErrFile to fd 2...
I0317 10:35:16.821146   62426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:35:16.821361   62426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
I0317 10:35:16.821933   62426 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 10:35:16.822029   62426 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 10:35:16.822365   62426 cli_runner.go:164] Run: docker container inspect functional-793863 --format={{.State.Status}}
I0317 10:35:16.840889   62426 ssh_runner.go:195] Run: systemctl --version
I0317 10:35:16.840931   62426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793863
I0317 10:35:16.857644   62426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/functional-793863/id_rsa Username:docker}
I0317 10:35:16.951882   62426 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 ssh pgrep buildkitd
2025/03/17 10:35:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-793863 ssh pgrep buildkitd: exit status 1 (242.75834ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image build -t localhost/my-image:functional-793863 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-793863 image build -t localhost/my-image:functional-793863 testdata/build --alsologtostderr: (3.338921942s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-793863 image build -t localhost/my-image:functional-793863 testdata/build --alsologtostderr:
I0317 10:35:17.272592   62571 out.go:345] Setting OutFile to fd 1 ...
I0317 10:35:17.273046   62571 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:35:17.273092   62571 out.go:358] Setting ErrFile to fd 2...
I0317 10:35:17.273109   62571 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:35:17.273547   62571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
I0317 10:35:17.274454   62571 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 10:35:17.274904   62571 config.go:182] Loaded profile config "functional-793863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 10:35:17.275330   62571 cli_runner.go:164] Run: docker container inspect functional-793863 --format={{.State.Status}}
I0317 10:35:17.293673   62571 ssh_runner.go:195] Run: systemctl --version
I0317 10:35:17.293733   62571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793863
I0317 10:35:17.315638   62571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/functional-793863/id_rsa Username:docker}
I0317 10:35:17.407301   62571 build_images.go:161] Building image from path: /tmp/build.986826970.tar
I0317 10:35:17.407399   62571 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0317 10:35:17.416192   62571 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.986826970.tar
I0317 10:35:17.419238   62571 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.986826970.tar: stat -c "%s %y" /var/lib/minikube/build/build.986826970.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.986826970.tar': No such file or directory
I0317 10:35:17.419331   62571 ssh_runner.go:362] scp /tmp/build.986826970.tar --> /var/lib/minikube/build/build.986826970.tar (3072 bytes)
I0317 10:35:17.440346   62571 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.986826970
I0317 10:35:17.447985   62571 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.986826970 -xf /var/lib/minikube/build/build.986826970.tar
I0317 10:35:17.455570   62571 containerd.go:394] Building image: /var/lib/minikube/build/build.986826970
I0317 10:35:17.455644   62571 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.986826970 --local dockerfile=/var/lib/minikube/build/build.986826970 --output type=image,name=localhost/my-image:functional-793863
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.7s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d215e762d01daa83bb7bed10b1aab8434d2dc9682e062cd2dff3f153fca5b75a done
#8 exporting config sha256:90ca590b8b7a4692896d8ddaa3d16d07be4bd5a058a14821a11d8ea6d4d1ff6c done
#8 naming to localhost/my-image:functional-793863 done
#8 DONE 0.1s
I0317 10:35:20.547510   62571 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.986826970 --local dockerfile=/var/lib/minikube/build/build.986826970 --output type=image,name=localhost/my-image:functional-793863: (3.091836925s)
I0317 10:35:20.547590   62571 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.986826970
I0317 10:35:20.555797   62571 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.986826970.tar
I0317 10:35:20.564208   62571 build_images.go:217] Built localhost/my-image:functional-793863 from /tmp/build.986826970.tar
I0317 10:35:20.564235   62571 build_images.go:133] succeeded building to: functional-793863
I0317 10:35:20.564239   62571 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.79s)
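
The buildkit trace above (steps #5-#7) implies a three-step Dockerfile. The contents of testdata/build are not shown in this report, so the sketch below is a hypothetical equivalent reconstructed from the trace, built from a scratch directory:

    # assumed layout; testdata/build itself is not reproduced in this report
    mkdir -p /tmp/build && cd /tmp/build
    echo hello > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    minikube -p functional-793863 image build -t localhost/my-image:functional-793863 .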

TestFunctional/parallel/ImageCommands/Setup (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.756218492s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-793863
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image load --daemon kicbase/echo-server:functional-793863 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image load --daemon kicbase/echo-server:functional-793863 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-793863
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image load --daemon kicbase/echo-server:functional-793863 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p functional-793863 image load --daemon kicbase/echo-server:functional-793863 --alsologtostderr: (1.233761632s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.38s)
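
The three load-daemon subtests above all push an image from the host's docker daemon into the cluster's containerd image store; condensed into a sketch:

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-793863
    minikube -p functional-793863 image load --daemon kicbase/echo-server:functional-793863
    minikube -p functional-793863 image ls | grep echo-server   # image now visible to containerd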

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-793863 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.169.105 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-793863 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
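
The tunnel group boils down to: run `minikube tunnel`, wait for the Service to be assigned an ingress IP, hit it, then terminate the tunnel. A sketch (nginx-svc is the Service used in this run; the curl check is illustrative):

    minikube -p functional-793863 tunnel &
    TUNNEL_PID=$!
    IP=$(kubectl --context functional-793863 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"   # reachable only while the tunnel runs
    kill "$TUNNEL_PID"     # DeleteTunnel stops the tunnel process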

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image save kicbase/echo-server:functional-793863 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image rm kicbase/echo-server:functional-793863 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-793863
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-793863 image save --daemon kicbase/echo-server:functional-793863 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-793863
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
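
Taken together, the save/remove/load subtests exercise a full image round trip; condensed into a sketch (the tarball path is a placeholder):

    minikube -p functional-793863 image save kicbase/echo-server:functional-793863 /tmp/echo-server.tar
    minikube -p functional-793863 image rm kicbase/echo-server:functional-793863
    minikube -p functional-793863 image load /tmp/echo-server.tar
    # or round-trip through the host docker daemon instead of a tarball
    minikube -p functional-793863 image save --daemon kicbase/echo-server:functional-793863
    docker image inspect kicbase/echo-server:functional-793863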

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-793863
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-793863
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-793863
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (97.81s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-983067 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0317 10:35:36.732195   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:36:58.654202   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-983067 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m37.144627959s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (97.81s)
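
The --ha flag provisions a cluster with multiple control-plane nodes; the start invocation used here, trimmed to its essentials (a sketch):

    minikube start -p ha-983067 --ha --wait=true --memory=2200 \
      --driver=docker --container-runtime=containerd
    minikube -p ha-983067 status   # each control-plane node should report Running/Configured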

TestMultiControlPlane/serial/DeployApp (5.94s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-983067 -- rollout status deployment/busybox: (4.165234319s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-292z6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-4fcf9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-mzq8r -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-292z6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-4fcf9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-mzq8r -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-292z6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-4fcf9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-mzq8r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.94s)
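
The deploy check is: apply a busybox Deployment, wait for the rollout, then resolve cluster DNS names from the pods. Condensed, using `kubectl exec deploy/...` to target any one pod rather than iterating over all three (a sketch):

    minikube kubectl -p ha-983067 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    minikube kubectl -p ha-983067 -- rollout status deployment/busybox
    minikube kubectl -p ha-983067 -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local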

TestMultiControlPlane/serial/PingHostFromPods (0.98s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-292z6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-292z6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-4fcf9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-4fcf9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-mzq8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-983067 -- exec busybox-58667487b6-mzq8r -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.98s)
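
The `awk 'NR==5' | cut -d' ' -f3` pipeline above plucks the resolved address out of busybox's fixed-format nslookup output (line 5, third space-separated field); the host reachability check then reduces to (a sketch, again using deploy/busybox to pick any pod):

    HOST_IP=$(minikube kubectl -p ha-983067 -- exec deploy/busybox -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    minikube kubectl -p ha-983067 -- exec deploy/busybox -- ping -c 1 "$HOST_IP"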

TestMultiControlPlane/serial/AddWorkerNode (20.89s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-983067 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-983067 -v=7 --alsologtostderr: (20.064764743s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.89s)
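
Adding capacity is a single command; `node add` joins the new node as a worker by default (a sketch):

    minikube node add -p ha-983067
    minikube -p ha-983067 status   # the new node appears with type: Worker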

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-983067 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (15.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp testdata/cp-test.txt ha-983067:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile321939201/001/cp-test_ha-983067.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067:/home/docker/cp-test.txt ha-983067-m02:/home/docker/cp-test_ha-983067_ha-983067-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m02 "sudo cat /home/docker/cp-test_ha-983067_ha-983067-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067:/home/docker/cp-test.txt ha-983067-m03:/home/docker/cp-test_ha-983067_ha-983067-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m03 "sudo cat /home/docker/cp-test_ha-983067_ha-983067-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067:/home/docker/cp-test.txt ha-983067-m04:/home/docker/cp-test_ha-983067_ha-983067-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m04 "sudo cat /home/docker/cp-test_ha-983067_ha-983067-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp testdata/cp-test.txt ha-983067-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile321939201/001/cp-test_ha-983067-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m02:/home/docker/cp-test.txt ha-983067:/home/docker/cp-test_ha-983067-m02_ha-983067.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067 "sudo cat /home/docker/cp-test_ha-983067-m02_ha-983067.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m02:/home/docker/cp-test.txt ha-983067-m03:/home/docker/cp-test_ha-983067-m02_ha-983067-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m03 "sudo cat /home/docker/cp-test_ha-983067-m02_ha-983067-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m02:/home/docker/cp-test.txt ha-983067-m04:/home/docker/cp-test_ha-983067-m02_ha-983067-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m04 "sudo cat /home/docker/cp-test_ha-983067-m02_ha-983067-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp testdata/cp-test.txt ha-983067-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile321939201/001/cp-test_ha-983067-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m03:/home/docker/cp-test.txt ha-983067:/home/docker/cp-test_ha-983067-m03_ha-983067.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067 "sudo cat /home/docker/cp-test_ha-983067-m03_ha-983067.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m03:/home/docker/cp-test.txt ha-983067-m02:/home/docker/cp-test_ha-983067-m03_ha-983067-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m02 "sudo cat /home/docker/cp-test_ha-983067-m03_ha-983067-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m03:/home/docker/cp-test.txt ha-983067-m04:/home/docker/cp-test_ha-983067-m03_ha-983067-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m04 "sudo cat /home/docker/cp-test_ha-983067-m03_ha-983067-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp testdata/cp-test.txt ha-983067-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile321939201/001/cp-test_ha-983067-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m04:/home/docker/cp-test.txt ha-983067:/home/docker/cp-test_ha-983067-m04_ha-983067.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067 "sudo cat /home/docker/cp-test_ha-983067-m04_ha-983067.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m04:/home/docker/cp-test.txt ha-983067-m02:/home/docker/cp-test_ha-983067-m04_ha-983067-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m02 "sudo cat /home/docker/cp-test_ha-983067-m04_ha-983067-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 cp ha-983067-m04:/home/docker/cp-test.txt ha-983067-m03:/home/docker/cp-test_ha-983067-m04_ha-983067-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 ssh -n ha-983067-m03 "sudo cat /home/docker/cp-test_ha-983067-m04_ha-983067-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.53s)
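
CopyFile runs `minikube cp` across every (source, destination) node pair and reads each file back over ssh; one representative pair, condensed (a sketch; the target filename is abbreviated):

    minikube -p ha-983067 cp testdata/cp-test.txt ha-983067-m02:/home/docker/cp-test.txt
    minikube -p ha-983067 cp ha-983067-m02:/home/docker/cp-test.txt \
      ha-983067-m03:/home/docker/cp-test_m02.txt
    minikube -p ha-983067 ssh -n ha-983067-m03 "sudo cat /home/docker/cp-test_m02.txt"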

TestMultiControlPlane/serial/StopSecondaryNode (12.46s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-983067 node stop m02 -v=7 --alsologtostderr: (11.813532945s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-983067 status -v=7 --alsologtostderr: exit status 7 (647.519274ms)
-- stdout --
	ha-983067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-983067-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-983067-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-983067-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0317 10:38:01.580215   83717 out.go:345] Setting OutFile to fd 1 ...
	I0317 10:38:01.580317   83717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:38:01.580326   83717 out.go:358] Setting ErrFile to fd 2...
	I0317 10:38:01.580330   83717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:38:01.580518   83717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 10:38:01.580669   83717 out.go:352] Setting JSON to false
	I0317 10:38:01.580695   83717 mustload.go:65] Loading cluster: ha-983067
	I0317 10:38:01.580745   83717 notify.go:220] Checking for updates...
	I0317 10:38:01.581051   83717 config.go:182] Loaded profile config "ha-983067": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 10:38:01.581067   83717 status.go:174] checking status of ha-983067 ...
	I0317 10:38:01.581459   83717 cli_runner.go:164] Run: docker container inspect ha-983067 --format={{.State.Status}}
	I0317 10:38:01.599263   83717 status.go:371] ha-983067 host status = "Running" (err=<nil>)
	I0317 10:38:01.599293   83717 host.go:66] Checking if "ha-983067" exists ...
	I0317 10:38:01.599508   83717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-983067
	I0317 10:38:01.616529   83717 host.go:66] Checking if "ha-983067" exists ...
	I0317 10:38:01.616845   83717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 10:38:01.616906   83717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-983067
	I0317 10:38:01.636340   83717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/ha-983067/id_rsa Username:docker}
	I0317 10:38:01.728598   83717 ssh_runner.go:195] Run: systemctl --version
	I0317 10:38:01.732924   83717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 10:38:01.743010   83717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 10:38:01.791594   83717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 10:38:01.782960198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 10:38:01.792119   83717 kubeconfig.go:125] found "ha-983067" server: "https://192.168.49.254:8443"
	I0317 10:38:01.792149   83717 api_server.go:166] Checking apiserver status ...
	I0317 10:38:01.792188   83717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 10:38:01.802722   83717 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1583/cgroup
	I0317 10:38:01.811336   83717 api_server.go:182] apiserver freezer: "10:freezer:/docker/4d752a851fcb1aaa76f2dd49c48e02a0a76d1ce5fd719359bdae0099d53e2284/kubepods/burstable/pod301104219ad9b4bb32912f6d4b84d1c1/6395bc5c6cdc384b3f260d446a234d464df858955aa7299c4dd15674fc67b136"
	I0317 10:38:01.811398   83717 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4d752a851fcb1aaa76f2dd49c48e02a0a76d1ce5fd719359bdae0099d53e2284/kubepods/burstable/pod301104219ad9b4bb32912f6d4b84d1c1/6395bc5c6cdc384b3f260d446a234d464df858955aa7299c4dd15674fc67b136/freezer.state
	I0317 10:38:01.819483   83717 api_server.go:204] freezer state: "THAWED"
	I0317 10:38:01.819516   83717 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0317 10:38:01.823923   83717 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0317 10:38:01.823951   83717 status.go:463] ha-983067 apiserver status = Running (err=<nil>)
	I0317 10:38:01.823963   83717 status.go:176] ha-983067 status: &{Name:ha-983067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 10:38:01.823982   83717 status.go:174] checking status of ha-983067-m02 ...
	I0317 10:38:01.824232   83717 cli_runner.go:164] Run: docker container inspect ha-983067-m02 --format={{.State.Status}}
	I0317 10:38:01.842973   83717 status.go:371] ha-983067-m02 host status = "Stopped" (err=<nil>)
	I0317 10:38:01.843022   83717 status.go:384] host is not running, skipping remaining checks
	I0317 10:38:01.843032   83717 status.go:176] ha-983067-m02 status: &{Name:ha-983067-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 10:38:01.843057   83717 status.go:174] checking status of ha-983067-m03 ...
	I0317 10:38:01.843391   83717 cli_runner.go:164] Run: docker container inspect ha-983067-m03 --format={{.State.Status}}
	I0317 10:38:01.861636   83717 status.go:371] ha-983067-m03 host status = "Running" (err=<nil>)
	I0317 10:38:01.861660   83717 host.go:66] Checking if "ha-983067-m03" exists ...
	I0317 10:38:01.861899   83717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-983067-m03
	I0317 10:38:01.879181   83717 host.go:66] Checking if "ha-983067-m03" exists ...
	I0317 10:38:01.879522   83717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 10:38:01.879571   83717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-983067-m03
	I0317 10:38:01.897496   83717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/ha-983067-m03/id_rsa Username:docker}
	I0317 10:38:01.988252   83717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 10:38:01.999238   83717 kubeconfig.go:125] found "ha-983067" server: "https://192.168.49.254:8443"
	I0317 10:38:01.999303   83717 api_server.go:166] Checking apiserver status ...
	I0317 10:38:01.999345   83717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 10:38:02.009404   83717 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup
	I0317 10:38:02.017810   83717 api_server.go:182] apiserver freezer: "10:freezer:/docker/14677818d0c36a426f1d2d6386382deb12432ede9a7a1df9c810ef6283a32841/kubepods/burstable/pod0156b391216edbce5280bdaf18128468/a91ab586c475f7d91afe886350d674aac371e2446d32892dd138b96b328322fe"
	I0317 10:38:02.017888   83717 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/14677818d0c36a426f1d2d6386382deb12432ede9a7a1df9c810ef6283a32841/kubepods/burstable/pod0156b391216edbce5280bdaf18128468/a91ab586c475f7d91afe886350d674aac371e2446d32892dd138b96b328322fe/freezer.state
	I0317 10:38:02.025397   83717 api_server.go:204] freezer state: "THAWED"
	I0317 10:38:02.025423   83717 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0317 10:38:02.028957   83717 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0317 10:38:02.028983   83717 status.go:463] ha-983067-m03 apiserver status = Running (err=<nil>)
	I0317 10:38:02.028994   83717 status.go:176] ha-983067-m03 status: &{Name:ha-983067-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 10:38:02.029012   83717 status.go:174] checking status of ha-983067-m04 ...
	I0317 10:38:02.029318   83717 cli_runner.go:164] Run: docker container inspect ha-983067-m04 --format={{.State.Status}}
	I0317 10:38:02.046537   83717 status.go:371] ha-983067-m04 host status = "Running" (err=<nil>)
	I0317 10:38:02.046561   83717 host.go:66] Checking if "ha-983067-m04" exists ...
	I0317 10:38:02.046854   83717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-983067-m04
	I0317 10:38:02.063668   83717 host.go:66] Checking if "ha-983067-m04" exists ...
	I0317 10:38:02.063928   83717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 10:38:02.063965   83717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-983067-m04
	I0317 10:38:02.080963   83717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/ha-983067-m04/id_rsa Username:docker}
	I0317 10:38:02.171931   83717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 10:38:02.181858   83717 status.go:176] ha-983067-m04 status: &{Name:ha-983067-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.46s)
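The status probe in the stderr trace above locates the kube-apiserver process, resolves its cgroup v1 freezer path, and only queries /healthz once the freezer reports THAWED (a frozen cgroup is how `minikube pause` parks the control plane). A minimal standalone sketch of the same sequence, assuming a cgroup v1 host like this Ubuntu 20.04 agent; the paths and endpoint come from the log above, the variable plumbing is illustrative:

    pid=$(pgrep -xnf 'kube-apiserver.*minikube.*')                          # newest matching apiserver process
    cgpath=$(grep -E '^[0-9]+:freezer:' /proc/${pid}/cgroup | cut -d: -f3)  # e.g. /docker/<id>/kubepods/...
    state=$(sudo cat /sys/fs/cgroup/freezer${cgpath}/freezer.state)         # THAWED unless the node is paused
    [ "$state" = "THAWED" ] && curl -sk https://192.168.49.254:8443/healthz # expect: ok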

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

TestMultiControlPlane/serial/RestartSecondaryNode (15.68s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-983067 node start m02 -v=7 --alsologtostderr: (14.803847317s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (15.68s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (131.98s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-983067 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-983067 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-983067 -v=7 --alsologtostderr: (36.68517611s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-983067 --wait=true -v=7 --alsologtostderr
E0317 10:39:14.789331   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:42.496489   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:44.177220   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:44.183635   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:44.194963   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:44.216306   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:44.257715   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:44.339270   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:44.500801   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:44.822607   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:45.464453   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:46.745739   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:49.307470   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:39:54.429418   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:40:04.670843   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
E0317 10:40:25.152241   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-983067 --wait=true -v=7 --alsologtostderr: (1m35.200106728s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-983067
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (131.98s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.06s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-983067 node delete m03 -v=7 --alsologtostderr: (8.320381321s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.06s)
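The go-template in the final assertion flattens each node's conditions to just the Ready status, one line per node, which makes node-count checks easy to script. A sketch of the same template run directly; the grep -c tail is an illustrative addition, not part of the test:

    tmpl='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    kubectl get nodes -o go-template="$tmpl"                                  # one True/False per node
    test "$(kubectl get nodes -o go-template="$tmpl" | grep -c True)" -eq 3   # e.g. expect 3 Ready nodes after deleting m03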

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

TestMultiControlPlane/serial/StopCluster (35.56s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 stop -v=7 --alsologtostderr
E0317 10:41:06.114570   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-983067 stop -v=7 --alsologtostderr: (35.459629188s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-983067 status -v=7 --alsologtostderr: exit status 7 (102.810602ms)

-- stdout --
	ha-983067
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-983067-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-983067-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0317 10:41:16.550979  100854 out.go:345] Setting OutFile to fd 1 ...
	I0317 10:41:16.551373  100854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:41:16.551384  100854 out.go:358] Setting ErrFile to fd 2...
	I0317 10:41:16.551390  100854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:41:16.551567  100854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 10:41:16.551733  100854 out.go:352] Setting JSON to false
	I0317 10:41:16.551768  100854 mustload.go:65] Loading cluster: ha-983067
	I0317 10:41:16.551850  100854 notify.go:220] Checking for updates...
	I0317 10:41:16.552302  100854 config.go:182] Loaded profile config "ha-983067": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 10:41:16.552325  100854 status.go:174] checking status of ha-983067 ...
	I0317 10:41:16.552813  100854 cli_runner.go:164] Run: docker container inspect ha-983067 --format={{.State.Status}}
	I0317 10:41:16.572883  100854 status.go:371] ha-983067 host status = "Stopped" (err=<nil>)
	I0317 10:41:16.572901  100854 status.go:384] host is not running, skipping remaining checks
	I0317 10:41:16.572906  100854 status.go:176] ha-983067 status: &{Name:ha-983067 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 10:41:16.572936  100854 status.go:174] checking status of ha-983067-m02 ...
	I0317 10:41:16.573168  100854 cli_runner.go:164] Run: docker container inspect ha-983067-m02 --format={{.State.Status}}
	I0317 10:41:16.591225  100854 status.go:371] ha-983067-m02 host status = "Stopped" (err=<nil>)
	I0317 10:41:16.591245  100854 status.go:384] host is not running, skipping remaining checks
	I0317 10:41:16.591339  100854 status.go:176] ha-983067-m02 status: &{Name:ha-983067-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 10:41:16.591359  100854 status.go:174] checking status of ha-983067-m04 ...
	I0317 10:41:16.591569  100854 cli_runner.go:164] Run: docker container inspect ha-983067-m04 --format={{.State.Status}}
	I0317 10:41:16.608252  100854 status.go:371] ha-983067-m04 host status = "Stopped" (err=<nil>)
	I0317 10:41:16.608288  100854 status.go:384] host is not running, skipping remaining checks
	I0317 10:41:16.608297  100854 status.go:176] ha-983067-m04 status: &{Name:ha-983067-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.56s)
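The exit status 7 from `status` above is expected rather than a failure: minikube composes the status exit code from per-component flags, and 7 matches the stdout showing host, control plane, and Kubernetes all stopped. A sketch of scripting against it, assuming the flag layout (1 = host not running, 2 = control plane not running, 4 = Kubernetes not running) used by recent minikube releases:

    out/minikube-linux-amd64 -p ha-983067 status; rc=$?
    [ $((rc & 1)) -ne 0 ] && echo "host stopped"
    [ $((rc & 2)) -ne 0 ] && echo "control plane stopped"
    [ $((rc & 4)) -ne 0 ] && echo "kubernetes stopped"   # rc=7 sets all three, matching the output above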

TestMultiControlPlane/serial/RestartCluster (67.91s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-983067 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-983067 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.159548469s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.91s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (35.91s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-983067 --control-plane -v=7 --alsologtostderr
E0317 10:42:28.036478   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-983067 --control-plane -v=7 --alsologtostderr: (35.086075891s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-983067 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.91s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

TestJSONOutput/start/Command (43.2s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-446731 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-446731 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (43.203871802s)
--- PASS: TestJSONOutput/start/Command (43.20s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-446731 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-446731 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.63s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-446731 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-446731 --output=json --user=testUser: (5.627506434s)
--- PASS: TestJSONOutput/stop/Command (5.63s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-772333 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-772333 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.045536ms)

-- stdout --
	{"specversion":"1.0","id":"92752fff-f16a-4d4d-a711-65e4225f5816","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-772333] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"124a837b-8805-4c47-a1ac-032eff0eb8ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20535"}}
	{"specversion":"1.0","id":"b3f6f7a3-8396-4022-a81c-9b90c4ad4dd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d24bc5de-4ae5-4cee-ac32-82b4ea051067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig"}}
	{"specversion":"1.0","id":"e8d44a08-9fd6-4d0e-8d60-ba1d55914481","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube"}}
	{"specversion":"1.0","id":"3b3944d3-03aa-4ac3-83f9-377d1b1b54fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a039b869-2ce1-4671-a68b-635b1c3fc1d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ac5a15d2-629a-48ba-b223-c7d3b71a357e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-772333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-772333
--- PASS: TestErrorJSONOutput (0.19s)
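Each line of the --output=json stream above is a CloudEvents envelope with the payload under .data, so it is straightforward to post-process. A sketch of extracting just the error event with jq; jq is an assumption here, not something the test itself uses:

    out/minikube-linux-amd64 start -p json-output-error-772333 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'
    # -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit 56)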

TestKicCustomNetwork/create_custom_network (33.42s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-226148 --network=
E0317 10:44:14.788505   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-226148 --network=: (31.377551519s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-226148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-226148
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-226148: (2.020530319s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.42s)

TestKicCustomNetwork/use_default_bridge_network (26.1s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-694478 --network=bridge
E0317 10:44:44.182129   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-694478 --network=bridge: (24.1447061s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-694478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-694478
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-694478: (1.94107828s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.10s)

TestKicExistingNetwork (24.46s)
=== RUN   TestKicExistingNetwork
I0317 10:45:03.241932   11690 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0317 10:45:03.258465   11690 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0317 10:45:03.258551   11690 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0317 10:45:03.258577   11690 cli_runner.go:164] Run: docker network inspect existing-network
W0317 10:45:03.274771   11690 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0317 10:45:03.274804   11690 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0317 10:45:03.274826   11690 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0317 10:45:03.274995   11690 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0317 10:45:03.291626   11690 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
I0317 10:45:03.292135   11690 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00212c180}
I0317 10:45:03.292165   11690 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0317 10:45:03.292211   11690 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0317 10:45:03.339883   11690 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-905878 --network=existing-network
E0317 10:45:11.883446   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-905878 --network=existing-network: (22.448547266s)
helpers_test.go:175: Cleaning up "existing-network-905878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-905878
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-905878: (1.877594298s)
I0317 10:45:27.683202   11690 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.46s)
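Reduced to its shell form, the setup this test exercises is: pre-create a bridge network on a free subnet (the log skipped the taken 192.168.49.0/24 and picked 192.168.58.0/24), then point --network at it. A sketch mirroring the docker network create call in the log, minus minikube's bookkeeping labels:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o com.docker.network.driver.mtu=1500 existing-network
    out/minikube-linux-amd64 start -p existing-network-905878 --network=existing-network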

TestKicCustomSubnet (23.32s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-963656 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-963656 --subnet=192.168.60.0/24: (21.300000121s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-963656 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-963656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-963656
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-963656: (2.005433396s)
--- PASS: TestKicCustomSubnet (23.32s)

TestKicStaticIP (22.61s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-190146 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-190146 --static-ip=192.168.200.200: (20.483964339s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-190146 ip
helpers_test.go:175: Cleaning up "static-ip-190146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-190146
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-190146: (2.008349309s)
--- PASS: TestKicStaticIP (22.61s)
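The pinned address from --static-ip can be confirmed two ways; the `ip` subcommand mirrors the test's own check, while the docker inspect variant is an illustrative alternative using standard Go templating:

    out/minikube-linux-amd64 -p static-ip-190146 ip      # expect 192.168.200.200
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' static-ip-190146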

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (45.2s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-664452 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-664452 --driver=docker  --container-runtime=containerd: (19.809391442s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-677469 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-677469 --driver=docker  --container-runtime=containerd: (20.295474411s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-664452
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-677469
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-677469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-677469
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-677469: (1.822691856s)
helpers_test.go:175: Cleaning up "first-664452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-664452
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-664452: (2.154051889s)
--- PASS: TestMinikubeProfile (45.20s)

TestMountStart/serial/StartWithMountFirst (7.97s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-351579 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-351579 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.965824488s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.97s)
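The mount flags above tune the 9p share of a host directory into the guest: --mount-port and --mount-msize set the transport port and message size, --mount-uid/--mount-gid the ownership inside the node. A sketch of verifying the share from the guest side; the grep on 9p is an assumption about the mount type, and /minikube-host matches the Verify steps that follow:

    out/minikube-linux-amd64 -p mount-start-1-351579 ssh -- "mount | grep 9p"   # should list the host share
    out/minikube-linux-amd64 -p mount-start-1-351579 ssh -- ls /minikube-host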

TestMountStart/serial/VerifyMountFirst (0.24s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-351579 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (5.43s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-363001 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-363001 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.426782763s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.43s)

TestMountStart/serial/VerifyMountSecond (0.24s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-363001 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.58s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-351579 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-351579 --alsologtostderr -v=5: (1.577325683s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-363001 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.17s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-363001
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-363001: (1.165328595s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.63s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-363001
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-363001: (6.632022721s)
--- PASS: TestMountStart/serial/RestartStopped (7.63s)

TestMountStart/serial/VerifyMountPostStop (0.24s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-363001 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (62.7s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-967871 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-967871 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.229616708s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.70s)

TestMultiNode/serial/DeployApp2Nodes (15.59s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-967871 -- rollout status deployment/busybox: (14.290896506s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- exec busybox-58667487b6-75mk5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- exec busybox-58667487b6-nb5f8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- exec busybox-58667487b6-75mk5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- exec busybox-58667487b6-nb5f8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- exec busybox-58667487b6-75mk5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- exec busybox-58667487b6-nb5f8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.59s)

TestMultiNode/serial/PingHostFrom2Pods (0.67s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- exec busybox-58667487b6-75mk5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- exec busybox-58667487b6-75mk5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- exec busybox-58667487b6-nb5f8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-967871 -- exec busybox-58667487b6-nb5f8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.67s)
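
The host-ping check above first extracts the host's IP from DNS, then pings it from inside each pod. A sketch of the same pipeline with the steps annotated (it assumes busybox's nslookup output layout, where the resolved address sits on line 5):

	# Resolve the name the minikube guest publishes for the host machine,
	# keep only line 5 of the nslookup output, and take the third
	# space-separated field: the IP (192.168.67.1 in this run).
	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"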

TestMultiNode/serial/AddNode (15.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-967871 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-967871 -v 3 --alsologtostderr: (14.42938062s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-967871 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (8.91s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp testdata/cp-test.txt multinode-967871:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile550109759/001/cp-test_multinode-967871.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871:/home/docker/cp-test.txt multinode-967871-m02:/home/docker/cp-test_multinode-967871_multinode-967871-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m02 "sudo cat /home/docker/cp-test_multinode-967871_multinode-967871-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871:/home/docker/cp-test.txt multinode-967871-m03:/home/docker/cp-test_multinode-967871_multinode-967871-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m03 "sudo cat /home/docker/cp-test_multinode-967871_multinode-967871-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp testdata/cp-test.txt multinode-967871-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile550109759/001/cp-test_multinode-967871-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871-m02:/home/docker/cp-test.txt multinode-967871:/home/docker/cp-test_multinode-967871-m02_multinode-967871.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871 "sudo cat /home/docker/cp-test_multinode-967871-m02_multinode-967871.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871-m02:/home/docker/cp-test.txt multinode-967871-m03:/home/docker/cp-test_multinode-967871-m02_multinode-967871-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m03 "sudo cat /home/docker/cp-test_multinode-967871-m02_multinode-967871-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp testdata/cp-test.txt multinode-967871-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile550109759/001/cp-test_multinode-967871-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871-m03:/home/docker/cp-test.txt multinode-967871:/home/docker/cp-test_multinode-967871-m03_multinode-967871.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871 "sudo cat /home/docker/cp-test_multinode-967871-m03_multinode-967871.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871-m03:/home/docker/cp-test.txt multinode-967871-m02:/home/docker/cp-test_multinode-967871-m03_multinode-967871-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m02 "sudo cat /home/docker/cp-test_multinode-967871-m03_multinode-967871-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.91s)
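
The copy matrix above exercises all three directions `minikube cp` supports: local to node, node to local, and node to node, each verified by cat-ing the file over SSH. One round trip, condensed (the destination file names below are illustrative, not the test's):

	out/minikube-linux-amd64 -p multinode-967871 cp testdata/cp-test.txt multinode-967871:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871:/home/docker/cp-test.txt /tmp/cp-test-local.txt
	out/minikube-linux-amd64 -p multinode-967871 cp multinode-967871:/home/docker/cp-test.txt multinode-967871-m02:/home/docker/cp-test-copy.txt
	out/minikube-linux-amd64 -p multinode-967871 ssh -n multinode-967871-m02 "sudo cat /home/docker/cp-test-copy.txt"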

TestMultiNode/serial/StopNode (2.07s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-967871 node stop m03: (1.168529808s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-967871 status: exit status 7 (457.76696ms)
-- stdout --
	multinode-967871
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-967871-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-967871-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-967871 status --alsologtostderr: exit status 7 (446.947108ms)
-- stdout --
	multinode-967871
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-967871-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-967871-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0317 10:49:10.603035  165551 out.go:345] Setting OutFile to fd 1 ...
	I0317 10:49:10.603618  165551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:49:10.603643  165551 out.go:358] Setting ErrFile to fd 2...
	I0317 10:49:10.603655  165551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:49:10.603867  165551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 10:49:10.604018  165551 out.go:352] Setting JSON to false
	I0317 10:49:10.604059  165551 mustload.go:65] Loading cluster: multinode-967871
	I0317 10:49:10.604175  165551 notify.go:220] Checking for updates...
	I0317 10:49:10.604554  165551 config.go:182] Loaded profile config "multinode-967871": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 10:49:10.604581  165551 status.go:174] checking status of multinode-967871 ...
	I0317 10:49:10.605029  165551 cli_runner.go:164] Run: docker container inspect multinode-967871 --format={{.State.Status}}
	I0317 10:49:10.622906  165551 status.go:371] multinode-967871 host status = "Running" (err=<nil>)
	I0317 10:49:10.622930  165551 host.go:66] Checking if "multinode-967871" exists ...
	I0317 10:49:10.623226  165551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-967871
	I0317 10:49:10.638557  165551 host.go:66] Checking if "multinode-967871" exists ...
	I0317 10:49:10.638788  165551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 10:49:10.638830  165551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-967871
	I0317 10:49:10.654372  165551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/multinode-967871/id_rsa Username:docker}
	I0317 10:49:10.743978  165551 ssh_runner.go:195] Run: systemctl --version
	I0317 10:49:10.747678  165551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 10:49:10.758218  165551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 10:49:10.805171  165551 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-03-17 10:49:10.796600772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 10:49:10.805704  165551 kubeconfig.go:125] found "multinode-967871" server: "https://192.168.67.2:8443"
	I0317 10:49:10.805754  165551 api_server.go:166] Checking apiserver status ...
	I0317 10:49:10.805804  165551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 10:49:10.815811  165551 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	I0317 10:49:10.824119  165551 api_server.go:182] apiserver freezer: "10:freezer:/docker/cdd8134b98f910cf778f2d403dba278b3d08879fdff0932515b9cc189c42ae01/kubepods/burstable/pod0c9c6df30166441b2592b6b71cf4cffe/f7d1eb95dbed19a413f16786e31615592e9c38c035324871a56fc2964c835038"
	I0317 10:49:10.824176  165551 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cdd8134b98f910cf778f2d403dba278b3d08879fdff0932515b9cc189c42ae01/kubepods/burstable/pod0c9c6df30166441b2592b6b71cf4cffe/f7d1eb95dbed19a413f16786e31615592e9c38c035324871a56fc2964c835038/freezer.state
	I0317 10:49:10.831402  165551 api_server.go:204] freezer state: "THAWED"
	I0317 10:49:10.831427  165551 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0317 10:49:10.835057  165551 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0317 10:49:10.835078  165551 status.go:463] multinode-967871 apiserver status = Running (err=<nil>)
	I0317 10:49:10.835090  165551 status.go:176] multinode-967871 status: &{Name:multinode-967871 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 10:49:10.835112  165551 status.go:174] checking status of multinode-967871-m02 ...
	I0317 10:49:10.835436  165551 cli_runner.go:164] Run: docker container inspect multinode-967871-m02 --format={{.State.Status}}
	I0317 10:49:10.852745  165551 status.go:371] multinode-967871-m02 host status = "Running" (err=<nil>)
	I0317 10:49:10.852771  165551 host.go:66] Checking if "multinode-967871-m02" exists ...
	I0317 10:49:10.853069  165551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-967871-m02
	I0317 10:49:10.869986  165551 host.go:66] Checking if "multinode-967871-m02" exists ...
	I0317 10:49:10.870370  165551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 10:49:10.870415  165551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-967871-m02
	I0317 10:49:10.886010  165551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/multinode-967871-m02/id_rsa Username:docker}
	I0317 10:49:10.976051  165551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 10:49:10.986565  165551 status.go:176] multinode-967871-m02 status: &{Name:multinode-967871-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0317 10:49:10.986604  165551 status.go:174] checking status of multinode-967871-m03 ...
	I0317 10:49:10.986956  165551 cli_runner.go:164] Run: docker container inspect multinode-967871-m03 --format={{.State.Status}}
	I0317 10:49:11.003920  165551 status.go:371] multinode-967871-m03 host status = "Stopped" (err=<nil>)
	I0317 10:49:11.003940  165551 status.go:384] host is not running, skipping remaining checks
	I0317 10:49:11.003948  165551 status.go:176] multinode-967871-m03 status: &{Name:multinode-967871-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.07s)
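
Note that both status invocations above pass despite exiting non-zero: `minikube status` uses exit code 7 to report that a host or kubelet is stopped, which is exactly what the test expects after stopping m03. A sketch of how a caller can distinguish this from a hard failure (exit-code meaning as observed in this run):

	out/minikube-linux-amd64 -p multinode-967871 node stop m03
	out/minikube-linux-amd64 -p multinode-967871 status
	rc=$?
	# rc=0: all nodes running; rc=7 (as here): at least one node stopped.
	[ "$rc" -eq 7 ] && echo "a node is stopped, as expected after 'node stop'"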

TestMultiNode/serial/StartAfterStop (8.33s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 node start m03 -v=7 --alsologtostderr
E0317 10:49:14.789170   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-967871 node start m03 -v=7 --alsologtostderr: (7.674621695s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.33s)

TestMultiNode/serial/RestartKeepsNodes (77.86s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-967871
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-967871
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-967871: (24.685619837s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-967871 --wait=true -v=8 --alsologtostderr
E0317 10:49:44.177887   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-967871 --wait=true -v=8 --alsologtostderr: (53.075242776s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-967871
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.86s)

TestMultiNode/serial/DeleteNode (4.92s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 node delete m03
E0317 10:50:37.858026   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-967871 node delete m03: (4.371104841s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.92s)

TestMultiNode/serial/StopMultiNode (23.73s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-967871 stop: (23.572964369s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-967871 status: exit status 7 (79.717379ms)
-- stdout --
	multinode-967871
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-967871-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-967871 status --alsologtostderr: exit status 7 (81.296138ms)
-- stdout --
	multinode-967871
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-967871-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0317 10:51:05.809839  175217 out.go:345] Setting OutFile to fd 1 ...
	I0317 10:51:05.810099  175217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:51:05.810108  175217 out.go:358] Setting ErrFile to fd 2...
	I0317 10:51:05.810113  175217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:51:05.810342  175217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 10:51:05.810530  175217 out.go:352] Setting JSON to false
	I0317 10:51:05.810560  175217 mustload.go:65] Loading cluster: multinode-967871
	I0317 10:51:05.810660  175217 notify.go:220] Checking for updates...
	I0317 10:51:05.811035  175217 config.go:182] Loaded profile config "multinode-967871": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 10:51:05.811055  175217 status.go:174] checking status of multinode-967871 ...
	I0317 10:51:05.811572  175217 cli_runner.go:164] Run: docker container inspect multinode-967871 --format={{.State.Status}}
	I0317 10:51:05.829828  175217 status.go:371] multinode-967871 host status = "Stopped" (err=<nil>)
	I0317 10:51:05.829857  175217 status.go:384] host is not running, skipping remaining checks
	I0317 10:51:05.829865  175217 status.go:176] multinode-967871 status: &{Name:multinode-967871 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 10:51:05.829898  175217 status.go:174] checking status of multinode-967871-m02 ...
	I0317 10:51:05.830215  175217 cli_runner.go:164] Run: docker container inspect multinode-967871-m02 --format={{.State.Status}}
	I0317 10:51:05.847228  175217 status.go:371] multinode-967871-m02 host status = "Stopped" (err=<nil>)
	I0317 10:51:05.847263  175217 status.go:384] host is not running, skipping remaining checks
	I0317 10:51:05.847272  175217 status.go:176] multinode-967871-m02 status: &{Name:multinode-967871-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.73s)

TestMultiNode/serial/RestartMultiNode (44.84s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-967871 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-967871 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (44.292708294s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-967871 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.84s)

TestMultiNode/serial/ValidateNameConflict (21.7s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-967871
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-967871-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-967871-m02 --driver=docker  --container-runtime=containerd: exit status 14 (61.296005ms)
-- stdout --
	* [multinode-967871-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-967871-m02' is duplicated with machine name 'multinode-967871-m02' in profile 'multinode-967871'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-967871-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-967871-m03 --driver=docker  --container-runtime=containerd: (19.500800915s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-967871
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-967871: exit status 80 (263.713859ms)
-- stdout --
	* Adding node m03 to cluster multinode-967871 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-967871-m03 already exists in multinode-967871-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-967871-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-967871-m03: (1.828177782s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.70s)

TestPreload (111.61s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-386661 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-386661 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m10.563345998s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-386661 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-386661 image pull gcr.io/k8s-minikube/busybox: (2.247936138s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-386661
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-386661: (11.882989412s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-386661 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-386661 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (24.429944957s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-386661 image list
helpers_test.go:175: Cleaning up "test-preload-386661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-386661
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-386661: (2.254833808s)
--- PASS: TestPreload (111.61s)
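
The preload scenario verifies that an image pulled into a cluster started with --preload=false survives a stop/restart cycle. The sequence, condensed from the run above (logging flags dropped):

	out/minikube-linux-amd64 start -p test-preload-386661 --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-386661 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-386661
	out/minikube-linux-amd64 start -p test-preload-386661 --memory=2200 --wait=true --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 -p test-preload-386661 image list   # busybox should still appear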

TestScheduledStopUnix (94.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-757982 --memory=2048 --driver=docker  --container-runtime=containerd
E0317 10:54:14.789186   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-757982 --memory=2048 --driver=docker  --container-runtime=containerd: (18.596402835s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-757982 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-757982 -n scheduled-stop-757982
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-757982 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0317 10:54:26.792333   11690 retry.go:31] will retry after 135.131µs: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.793491   11690 retry.go:31] will retry after 224.749µs: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.794613   11690 retry.go:31] will retry after 197.64µs: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.795730   11690 retry.go:31] will retry after 464.293µs: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.796842   11690 retry.go:31] will retry after 647.315µs: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.797954   11690 retry.go:31] will retry after 484.893µs: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.799065   11690 retry.go:31] will retry after 1.508612ms: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.801252   11690 retry.go:31] will retry after 2.299425ms: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.804455   11690 retry.go:31] will retry after 1.829631ms: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.806646   11690 retry.go:31] will retry after 2.170118ms: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.809841   11690 retry.go:31] will retry after 5.584019ms: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.816048   11690 retry.go:31] will retry after 8.615405ms: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.825233   11690 retry.go:31] will retry after 15.618357ms: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.841458   11690 retry.go:31] will retry after 15.483436ms: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
I0317 10:54:26.857732   11690 retry.go:31] will retry after 28.373508ms: open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/scheduled-stop-757982/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-757982 --cancel-scheduled
E0317 10:54:44.184672   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-757982 -n scheduled-stop-757982
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-757982
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-757982 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-757982
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-757982: exit status 7 (64.228374ms)
-- stdout --
	scheduled-stop-757982
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-757982 -n scheduled-stop-757982
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-757982 -n scheduled-stop-757982: exit status 7 (65.533815ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-757982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-757982
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-757982: (5.05510789s)
--- PASS: TestScheduledStopUnix (94.94s)
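
For reference, the scheduled-stop interface driven above: --schedule arms a delayed stop, --cancel-scheduled disarms it, and once a scheduled stop has fired, status exits 7 with everything reported Stopped. Condensed from the run:

	out/minikube-linux-amd64 stop -p scheduled-stop-757982 --schedule 5m        # arm a stop five minutes out
	out/minikube-linux-amd64 stop -p scheduled-stop-757982 --cancel-scheduled   # disarm it
	out/minikube-linux-amd64 stop -p scheduled-stop-757982 --schedule 15s       # arm again; ~15s later the node stops
	out/minikube-linux-amd64 status -p scheduled-stop-757982                    # now exits 7: host/kubelet/apiserver Stopped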

TestInsufficientStorage (9.16s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-174769 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-174769 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.872529049s)
-- stdout --
	{"specversion":"1.0","id":"98b113c1-15e9-4f0a-a03d-a4bd715f3ece","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-174769] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"09cd4da1-a9bc-4040-8be8-cfecc725a79d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20535"}}
	{"specversion":"1.0","id":"1c605ee9-8d11-4137-8151-4c545579d53d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c76dbab4-3620-47f0-b0f8-c141b7fb58e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig"}}
	{"specversion":"1.0","id":"5ac74e33-80e0-48c6-b078-3286e52e2c6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube"}}
	{"specversion":"1.0","id":"febf0729-caee-45d2-b8d8-2ab5ed2ed3c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"52b83952-2cf8-4077-8642-55faa24334b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"397ac5cf-f3e8-4760-9e4d-4d919409708d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4e5fe5ae-9935-4481-a26f-bbe3259f90f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f61e9c94-43d2-48d7-a80e-b547068c64cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"949c45f8-74a4-48ed-bf3b-99347b39b150","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7cb348be-ada5-488c-8b88-a373b85e8bed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-174769\" primary control-plane node in \"insufficient-storage-174769\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"404d05d6-d7e9-49a5-b7ef-f7d029ec8ef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1741860993-20523 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2a216828-a30a-41d1-af29-d9279a5c4599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c333982-4e35-4bc1-a7ab-32eef5404162","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-174769 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-174769 --output=json --layout=cluster: exit status 7 (255.019424ms)
-- stdout --
	{"Name":"insufficient-storage-174769","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-174769","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0317 10:55:49.863904  198186 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-174769" does not appear in /home/jenkins/minikube-integration/20535-4918/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-174769 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-174769 --output=json --layout=cluster: exit status 7 (250.57893ms)
-- stdout --
	{"Name":"insufficient-storage-174769","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-174769","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0317 10:55:50.114924  198284 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-174769" does not appear in /home/jenkins/minikube-integration/20535-4918/kubeconfig
	E0317 10:55:50.124427  198284 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/insufficient-storage-174769/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-174769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-174769
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-174769: (1.783037444s)
--- PASS: TestInsufficientStorage (9.16s)
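
The exit-26 advice is embedded as escaped JSON above; unescaped, the remediation it suggests is (commands as given in the advice text; the second item is the Docker Desktop preferences path, and the last applies only when the Docker container runtime is in use):

	docker system prune                    # remove unused Docker data (optionally with -a)
	# Docker Desktop: Docker icon > Preferences > Resources > Disk Image Size
	minikube ssh -- docker system prune    # if using the Docker container runtime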

TestRunningBinaryUpgrade (150.11s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3126927013 start -p running-upgrade-443193 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0317 10:56:07.245199   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3126927013 start -p running-upgrade-443193 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m49.194311886s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-443193 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-443193 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.058277847s)
helpers_test.go:175: Cleaning up "running-upgrade-443193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-443193
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-443193: (2.389717615s)
--- PASS: TestRunningBinaryUpgrade (150.11s)

TestKubernetesUpgrade (318.16s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.913831247s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-038579
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-038579: (1.661182666s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-038579 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-038579 status --format={{.Host}}: exit status 7 (63.030659ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m24.570687676s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-038579 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (60.677337ms)
-- stdout --
	* [kubernetes-upgrade-038579] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-038579
	    minikube start -p kubernetes-upgrade-038579 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0385792 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-038579 --kubernetes-version=v1.32.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4.687668542s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-038579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-038579
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-038579: (2.148162328s)
--- PASS: TestKubernetesUpgrade (318.16s)
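
The upgrade path exercised above is stop, then start again with the newer --kubernetes-version; an in-place downgrade is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED). Condensed (logging flags dropped):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-038579
	out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.32.2 --driver=docker --container-runtime=containerd
	# Going back down is rejected; delete and recreate instead (see the suggestion above):
	out/minikube-linux-amd64 start -p kubernetes-upgrade-038579 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd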

TestMissingContainerUpgrade (175.24s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3807894616 start -p missing-upgrade-397855 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3807894616 start -p missing-upgrade-397855 --memory=2200 --driver=docker  --container-runtime=containerd: (1m52.548591648s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-397855
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-397855: (10.413176312s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-397855
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-397855 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-397855 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.571605114s)
helpers_test.go:175: Cleaning up "missing-upgrade-397855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-397855
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-397855: (2.278524108s)
--- PASS: TestMissingContainerUpgrade (175.24s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-415587 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-415587 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (66.781104ms)

-- stdout --
	* [NoKubernetes-415587] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
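Exit status 14 (MK_USAGE) is likewise the asserted outcome: --kubernetes-version conflicts with --no-kubernetes by design. A hedged Go sketch of such an expected-failure assertion (simplified; the real helpers in no_kubernetes_test.go are not shown in this log):

	package sketch

	import (
		"errors"
		"os/exec"
		"testing"
	)

	// Expected-failure assertion: the start command must exit with
	// status 14 (MK_USAGE). Succeeding, or failing any other way,
	// would fail the test. (Sketch only, not the suite's own helper.)
	func TestStartNoK8sWithVersionSketch(t *testing.T) {
		err := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "NoKubernetes-415587", "--no-kubernetes",
			"--kubernetes-version=1.20", "--driver=docker",
			"--container-runtime=containerd").Run()
		var exitErr *exec.ExitError
		if !errors.As(err, &exitErr) || exitErr.ExitCode() != 14 {
			t.Fatalf("expected exit status 14 (MK_USAGE), got %v", err)
		}
	}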
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (24.92s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-415587 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-415587 --driver=docker  --container-runtime=containerd: (24.630107311s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-415587 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.92s)

TestNoKubernetes/serial/StartWithStopK8s (17.87s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-415587 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-415587 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.787455639s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-415587 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-415587 status -o json: exit status 2 (261.737292ms)

-- stdout --
	{"Name":"NoKubernetes-415587","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-415587
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-415587: (1.819652139s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.87s)

TestNoKubernetes/serial/Start (5.65s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-415587 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-415587 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.650237152s)
--- PASS: TestNoKubernetes/serial/Start (5.65s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-415587 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-415587 "sudo systemctl is-active --quiet service kubelet": exit status 1 (298.219918ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
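Here the non-zero exit is the passing outcome: systemctl is-active exits 3 for an inactive unit, which is the "Process exited with status 3" above, proving kubelet is not running. A Go sketch of that inverted assertion (verifyKubeletStopped is a hypothetical helper):

	package sketch

	import (
		"os/exec"
		"testing"
	)

	// verifyKubeletStopped passes only when `systemctl is-active` fails
	// inside the node; exit status 3 means the kubelet unit is inactive.
	func verifyKubeletStopped(t *testing.T, profile string) {
		cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err == nil {
			t.Fatalf("kubelet is still active in profile %q", profile)
		}
	}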
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

TestNoKubernetes/serial/ProfileList (0.74s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.74s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-415587
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-415587: (1.223289139s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (7.48s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-415587 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-415587 --driver=docker  --container-runtime=containerd: (7.475254167s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.48s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-415587 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-415587 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.639623ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestNetworkPlugins/group/false (2.97s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-236437 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-236437 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (139.126332ms)

-- stdout --
	* [false-236437] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I0317 10:56:55.336648  213199 out.go:345] Setting OutFile to fd 1 ...
	I0317 10:56:55.337131  213199 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:56:55.337170  213199 out.go:358] Setting ErrFile to fd 2...
	I0317 10:56:55.337186  213199 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:56:55.337725  213199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
	I0317 10:56:55.338352  213199 out.go:352] Setting JSON to false
	I0317 10:56:55.339478  213199 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2308,"bootTime":1742206707,"procs":402,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 10:56:55.339571  213199 start.go:139] virtualization: kvm guest
	I0317 10:56:55.341743  213199 out.go:177] * [false-236437] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 10:56:55.343057  213199 notify.go:220] Checking for updates...
	I0317 10:56:55.343094  213199 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 10:56:55.344359  213199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 10:56:55.345421  213199 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
	I0317 10:56:55.346545  213199 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
	I0317 10:56:55.347729  213199 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 10:56:55.348916  213199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 10:56:55.350605  213199 config.go:182] Loaded profile config "force-systemd-env-616333": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 10:56:55.350726  213199 config.go:182] Loaded profile config "missing-upgrade-397855": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0317 10:56:55.350844  213199 config.go:182] Loaded profile config "running-upgrade-443193": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0317 10:56:55.350948  213199 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 10:56:55.373622  213199 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 10:56:55.373751  213199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 10:56:55.422415  213199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:58 SystemTime:2025-03-17 10:56:55.413324842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 10:56:55.422523  213199 docker.go:318] overlay module found
	I0317 10:56:55.424546  213199 out.go:177] * Using the docker driver based on user configuration
	I0317 10:56:55.425669  213199 start.go:297] selected driver: docker
	I0317 10:56:55.425683  213199 start.go:901] validating driver "docker" against <nil>
	I0317 10:56:55.425695  213199 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 10:56:55.427796  213199 out.go:201] 
	W0317 10:56:55.429098  213199 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0317 10:56:55.430388  213199 out.go:201]

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-236437 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-236437

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-236437

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-236437

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-236437

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-236437

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-236437

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-236437

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-236437

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-236437

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-236437

>>> host: /etc/nsswitch.conf:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: /etc/hosts:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: /etc/resolv.conf:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-236437

>>> host: crictl pods:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: crictl containers:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> k8s: describe netcat deployment:
error: context "false-236437" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-236437" does not exist

>>> k8s: netcat logs:
error: context "false-236437" does not exist

>>> k8s: describe coredns deployment:
error: context "false-236437" does not exist

>>> k8s: describe coredns pods:
error: context "false-236437" does not exist

>>> k8s: coredns logs:
error: context "false-236437" does not exist

>>> k8s: describe api server pod(s):
error: context "false-236437" does not exist

>>> k8s: api server logs:
error: context "false-236437" does not exist

>>> host: /etc/cni:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: ip a s:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: ip r s:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: iptables-save:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: iptables table nat:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> k8s: describe kube-proxy daemon set:
error: context "false-236437" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-236437" does not exist

>>> k8s: kube-proxy logs:
error: context "false-236437" does not exist

>>> host: kubelet daemon status:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: kubelet daemon config:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> k8s: kubelet logs:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-236437

>>> host: docker daemon status:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: docker daemon config:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: /etc/docker/daemon.json:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: docker system info:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: cri-docker daemon status:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: cri-docker daemon config:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: cri-dockerd version:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: containerd daemon status:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: containerd daemon config:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: /etc/containerd/config.toml:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: containerd config dump:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: crio daemon status:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: crio daemon config:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: /etc/crio:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

>>> host: crio config:
* Profile "false-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236437"

----------------------- debugLogs end: false-236437 [took: 2.690606485s] --------------------------------
helpers_test.go:175: Cleaning up "false-236437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-236437
--- PASS: TestNetworkPlugins/group/false (2.97s)

TestStoppedBinaryUpgrade/Setup (2.27s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.27s)

TestStoppedBinaryUpgrade/Upgrade (85.64s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3077919274 start -p stopped-upgrade-873690 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3077919274 start -p stopped-upgrade-873690 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (22.894830222s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3077919274 -p stopped-upgrade-873690 stop
E0317 10:59:14.789329   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3077919274 -p stopped-upgrade-873690 stop: (19.852307332s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-873690 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0317 10:59:44.177653   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-873690 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.894558184s)
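The three steps above are the whole upgrade contract: the old release binary creates the cluster, the cluster is stopped, and the new binary must adopt and restart the same profile. A compact Go sketch of that flow (binary paths and profile name taken from this run; error handling simplified):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Old binary creates the profile, the cluster is stopped, then the
		// new binary must restart the stopped cluster without recreating it.
		steps := [][]string{
			{"/tmp/minikube-v1.26.0.3077919274", "start", "-p", "stopped-upgrade-873690", "--memory=2200"},
			{"/tmp/minikube-v1.26.0.3077919274", "-p", "stopped-upgrade-873690", "stop"},
			{"out/minikube-linux-amd64", "start", "-p", "stopped-upgrade-873690", "--memory=2200"},
		}
		for _, s := range steps {
			if err := exec.Command(s[0], s[1:]...).Run(); err != nil {
				log.Fatalf("%v: %v", s, err)
			}
		}
	}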
--- PASS: TestStoppedBinaryUpgrade/Upgrade (85.64s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-873690
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

TestNetworkPlugins/group/auto/Start (779.63s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (12m59.629839493s)
--- PASS: TestNetworkPlugins/group/auto/Start (779.63s)

TestNetworkPlugins/group/custom-flannel/Start (36.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0317 11:09:14.788520   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (36.408543717s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (36.41s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-236437 "pgrep -a kubelet"
I0317 11:09:43.682072   11690 config.go:182] Loaded profile config "custom-flannel-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-236437 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-rwwcr" [172cc1ac-abe9-4e4d-b409-94217083e359] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0317 11:09:44.177572   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-rwwcr" [172cc1ac-abe9-4e4d-b409-94217083e359] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003695502s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-236437 exec deployment/netcat -- nslookup kubernetes.default
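This one command is the core connectivity assertion of the CNI tests: an in-pod nslookup of kubernetes.default succeeds only if both pod networking and cluster DNS work over the plugin's network. A Go sketch of the probe (context name taken from this run):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// In-pod DNS probe: resolving kubernetes.default from inside the
		// netcat deployment exercises pod networking and cluster DNS together.
		out, err := exec.Command("kubectl", "--context", "custom-flannel-236437",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		if err != nil {
			log.Fatalf("DNS probe failed: %v\n%s", err, out)
		}
	}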
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-236437 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-236437 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
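The HairPin probe has the pod dial its own Service name, so traffic must leave the pod, reach the netcat service VIP, and be NATed back to the originating pod, a path some bridge/CNI configurations break. A Go sketch in the same style (context name taken from this run):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Hairpin probe: the netcat pod connects to its own Service
		// ("netcat" on port 8080), so the connection must loop back
		// through the service VIP to the very pod that opened it.
		err := exec.Command("kubectl", "--context", "custom-flannel-236437",
			"exec", "deployment/netcat", "--",
			"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080").Run()
		if err != nil {
			log.Fatalf("hairpin probe failed: %v", err)
		}
	}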
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (38.8s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (38.800161253s)
--- PASS: TestNetworkPlugins/group/flannel/Start (38.80s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tczbj" [b91b0b89-50cc-42c4-b81a-117736520a3d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00300518s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-236437 "pgrep -a kubelet"
I0317 11:10:56.336402   11690 config.go:182] Loaded profile config "flannel-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-236437 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-j5mwp" [d2bf26da-4a85-4e21-aa6f-c861eca2964d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-j5mwp" [d2bf26da-4a85-4e21-aa6f-c861eca2964d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003653934s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-236437 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-236437 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-236437 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (65.31s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m5.31437964s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.31s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-236437 "pgrep -a kubelet"
I0317 11:12:29.768950   11690 config.go:182] Loaded profile config "bridge-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-236437 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vp5f6" [2b330b4f-1e6c-4fc1-ba97-3068d522680c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vp5f6" [2b330b4f-1e6c-4fc1-ba97-3068d522680c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004196486s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-236437 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-236437 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-236437 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (71.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-236437 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m11.090757749s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.09s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-236437 "pgrep -a kubelet"
I0317 11:13:19.330341   11690 config.go:182] Loaded profile config "auto-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-236437 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ntjmd" [b45acad4-8d2e-4308-9145-a3a2bdffc7af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ntjmd" [b45acad4-8d2e-4308-9145-a3a2bdffc7af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003990331s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.21s)

TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-236437 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-236437 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-236437 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-236437 "pgrep -a kubelet"
I0317 11:14:08.163509   11690 config.go:182] Loaded profile config "enable-default-cni-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-236437 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hg9p2" [6d16d01c-d113-41d8-9cb1-ecc2678383c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hg9p2" [6d16d01c-d113-41d8-9cb1-ecc2678383c7] Running
E0317 11:14:14.788524   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003411801s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-236437 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-236437 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-236437 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (408.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-702762 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [57820910-1156-4bd5-9ad3-864b971494cb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0317 11:24:36.033947   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/enable-default-cni-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:24:43.849831   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:24:44.177131   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [57820910-1156-4bd5-9ad3-864b971494cb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 6m48.003817187s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-702762 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (408.36s)
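
DeployApp follows the usual create-and-wait pattern with a plain busybox pod, then execs a command in it as an end-to-end smoke test; ulimit -n simply reports the container's open-file limit. Note that busybox needed 6m48s to go Ready here, close to the step's 8m budget. The interleaved E0317 cert_rotation lines appear to be noise from the harness's client-go credential reloader still watching certificates of profiles deleted earlier in the run, not test failures. Reproducing the step by hand (kubectl wait stands in for the harness's own polling):

    kubectl --context old-k8s-version-702762 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-702762 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context old-k8s-version-702762 exec busybox -- /bin/sh -c "ulimit -n"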

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (421.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-189670 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d5b4d701-c4ad-40ce-a141-23e6bbb6b137] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0317 11:25:50.079730   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:27:29.943704   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:28:19.528395   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/auto-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:29:08.330240   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/enable-default-cni-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:29:14.788843   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/addons-712202/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [d5b4d701-c4ad-40ce-a141-23e6bbb6b137] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7m1.004016594s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-189670 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (421.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (25.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-974033 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-974033 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (25.064817704s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.06s)
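
The newest-cni profile starts with --network-plugin=cni and pushes a custom pod CIDR down to kubeadm via --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, minikube's generic mechanism for forwarding flags to cluster components. A quick spot check that the CIDR landed on the node (illustrative, not part of the test):

    kubectl --context newest-cni-974033 get nodes -o jsonpath='{.items[0].spec.podCIDR}'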

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (162.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-627203 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [104dca4d-45d8-4ba1-aac6-5a526d653d45] Pending
helpers_test.go:344: "busybox" [104dca4d-45d8-4ba1-aac6-5a526d653d45] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0317 11:29:43.850023   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
E0317 11:29:44.177542   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/functional-793863/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [104dca4d-45d8-4ba1-aac6-5a526d653d45] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 2m42.003763143s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-627203 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (162.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-974033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-974033 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-974033 --alsologtostderr -v=3: (1.251403857s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-974033 -n newest-cni-974033
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-974033 -n newest-cni-974033: exit status 7 (66.018392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-974033 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
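
The exit codes here are deliberate: minikube status exits 0 only when all components are running, and with the host stopped it exits 7 in this run, which the harness records as "may be ok" because the preceding Stop step put the cluster in exactly that state. The step then proves addon configuration can still be changed while the cluster is down:

    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-974033 -n newest-cni-974033
    echo $?   # 7 while the host is stopped
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-974033 --images=MetricsScraper=registry.k8s.io/echoserver:1.4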

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.64s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-974033 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-974033 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (12.306509229s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-974033 -n newest-cni-974033
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-974033 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-974033 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-974033 -n newest-cni-974033
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-974033 -n newest-cni-974033: exit status 2 (285.099498ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-974033 -n newest-cni-974033
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-974033 -n newest-cni-974033: exit status 2 (289.807663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-974033 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-974033 -n newest-cni-974033
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-974033 -n newest-cni-974033
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)
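
The Pause sequence is a round trip: pause the profile, confirm via status that the API server reports Paused and the kubelet Stopped (both as exit status 2 in this run), then unpause and confirm both queries come back clean. Condensed, with the per-node -n flag omitted since these are single-node profiles:

    out/minikube-linux-amd64 pause -p newest-cni-974033
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-974033   # Paused, exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-974033     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p newest-cni-974033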

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (40.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-893723 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0317 11:30:50.080486   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-893723 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (40.922626405s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.92s)
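
--embed-certs changes how the generated kubeconfig references client credentials: the certificate and key are inlined as base64 data instead of file paths under the profile directory. An illustrative check, with this run's KUBECONFIG exported:

    # embedded credentials show up as *-data fields rather than file paths
    kubectl config view --raw | grep -E 'client-(certificate|key)-data'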

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-893723 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [492ad289-e0ab-4021-82a8-76223df1d04c] Pending
E0317 11:31:06.917627   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/custom-flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [492ad289-e0ab-4021-82a8-76223df1d04c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [492ad289-e0ab-4021-82a8-76223df1d04c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.002973549s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-893723 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-893723 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-893723 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-893723 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-893723 --alsologtostderr -v=3: (12.066402367s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-702762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-702762 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-702762 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-702762 --alsologtostderr -v=3: (11.922193901s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-893723 -n embed-certs-893723
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-893723 -n embed-certs-893723: exit status 7 (65.732869ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-893723 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (263.02s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-893723 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-893723 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (4m22.720407593s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-893723 -n embed-certs-893723
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-702762 -n old-k8s-version-702762
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-702762 -n old-k8s-version-702762: exit status 7 (64.745305ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-702762 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (131.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-702762 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-702762 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m10.944668708s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-702762 -n old-k8s-version-702762
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (131.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-189670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-189670 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-189670 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-189670 --alsologtostderr -v=3: (12.257959312s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-189670 -n no-preload-189670
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-189670 -n no-preload-189670: exit status 7 (82.416003ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-189670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (262.02s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-189670 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0317 11:32:13.144450   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/flannel-236437/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-189670 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (4m21.732759989s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-189670 -n no-preload-189670
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.02s)
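
--preload=false disables minikube's preloaded image/filesystem tarball, so every component image must be pulled at start rather than extracted from the cache, which accounts for this group's multi-minute start times. The tarballs that would otherwise be used live under the MINIKUBE_HOME cache (path shown is the conventional location, illustrative only):

    ls "$MINIKUBE_HOME/cache/preloaded-tarball/"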

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-627203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-627203 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-627203 --alsologtostderr -v=3
E0317 11:32:29.943396   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-627203 --alsologtostderr -v=3: (12.076554147s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203: exit status 7 (76.690042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-627203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-627203 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0317 11:33:19.528633   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/auto-236437/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-627203 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (4m21.74286708s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.02s)
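
The default-k8s-diff-port group exists to prove nothing hard-codes the default API server port: --apiserver-port=8444 moves the endpoint off the usual 8443, and every kubectl call in the group goes through it. A spot check that the control plane really answers on 8444 (illustrative):

    kubectl --context default-k8s-diff-port-627203 cluster-info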

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-55h4c" [aef0ec3b-a8ea-4a91-ba8f-c3127f3a31e3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003538505s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-55h4c" [aef0ec3b-a8ea-4a91-ba8f-c3127f3a31e3] Running
E0317 11:33:53.006588   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/bridge-236437/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003581438s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-702762 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-702762 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
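
VerifyKubernetesImages asks the container runtime for its image inventory and compares it against the set minikube itself ships; anything outside that set, such as the kindnetd and busybox images pulled by earlier steps, is logged as "non-minikube". The JSON form is convenient for inspecting the full list directly:

    out/minikube-linux-amd64 -p old-k8s-version-702762 image list --format=json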

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-702762 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-702762 -n old-k8s-version-702762
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-702762 -n old-k8s-version-702762: exit status 2 (303.402103ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-702762 -n old-k8s-version-702762
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-702762 -n old-k8s-version-702762: exit status 2 (286.353092ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-702762 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-702762 -n old-k8s-version-702762
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-702762 -n old-k8s-version-702762
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6jtbp" [75a18578-77e2-48ae-ad60-12e52ed27bb0] Running
E0317 11:35:54.154190   11690 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/old-k8s-version-702762/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003016431s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6jtbp" [75a18578-77e2-48ae-ad60-12e52ed27bb0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003261465s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-893723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-893723 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.57s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-893723 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-893723 -n embed-certs-893723
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-893723 -n embed-certs-893723: exit status 2 (278.964012ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-893723 -n embed-certs-893723
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-893723 -n embed-certs-893723: exit status 2 (284.336783ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-893723 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-893723 -n embed-certs-893723
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-893723 -n embed-certs-893723
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6nzs4" [16c29e21-ab0f-4236-885c-6c0e7acff6ab] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003435717s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6nzs4" [16c29e21-ab0f-4236-885c-6c0e7acff6ab] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003380783s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-189670 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-189670 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.53s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-189670 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-189670 -n no-preload-189670
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-189670 -n no-preload-189670: exit status 2 (276.121516ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-189670 -n no-preload-189670
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-189670 -n no-preload-189670: exit status 2 (281.34728ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-189670 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-189670 -n no-preload-189670
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-189670 -n no-preload-189670
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-z68gk" [a855b2f4-c235-4c8b-b03e-4451848e7ea1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003922517s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-z68gk" [a855b2f4-c235-4c8b-b03e-4451848e7ea1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003896207s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-627203 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-627203 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-627203 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203: exit status 2 (276.815891ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203: exit status 2 (275.14702ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-627203 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-627203 -n default-k8s-diff-port-627203
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

                                                
                                    

Test skip (25/312)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
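This skip, like the DockerEnv, PodmanEnv and Skaffold skips below, comes from the same runtime gate: the suite was invoked with --container-runtime=containerd, so docker-only tests bail out up front. A hedged shell rendering of that gate (the CONTAINER_RUNTIME variable is illustrative; the real check lives in docker_test.go):

	# illustrative only: skip docker-only coverage when testing another runtime
	if [ "$CONTAINER_RUNTIME" != "docker" ]; then
	    echo "skipping: only runs with docker container runtime, currently testing $CONTAINER_RUNTIME"
	    exit 0
	fi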

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.09s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-236437 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-236437

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-236437

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-236437

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-236437

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-236437

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-236437

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-236437

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-236437

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-236437

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-236437

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: /etc/hosts:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: /etc/resolv.conf:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-236437

>>> host: crictl pods:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: crictl containers:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> k8s: describe netcat deployment:
error: context "kubenet-236437" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-236437" does not exist

>>> k8s: netcat logs:
error: context "kubenet-236437" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-236437" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-236437" does not exist

>>> k8s: coredns logs:
error: context "kubenet-236437" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-236437" does not exist

>>> k8s: api server logs:
error: context "kubenet-236437" does not exist

>>> host: /etc/cni:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: ip a s:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: ip r s:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: iptables-save:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: iptables table nat:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-236437" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-236437" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-236437" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: kubelet daemon config:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> k8s: kubelet logs:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-236437

>>> host: docker daemon status:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: docker daemon config:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: docker system info:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: cri-docker daemon status:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: cri-docker daemon config:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: cri-dockerd version:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: containerd daemon status:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: containerd daemon config:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: containerd config dump:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: crio daemon status:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: crio daemon config:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: /etc/crio:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

>>> host: crio config:
* Profile "kubenet-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236437"

----------------------- debugLogs end: kubenet-236437 [took: 2.937547452s] --------------------------------
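Every probe in the dump above fails the same way because the skip fires before any cluster is created: no kubenet-236437 profile or kubeconfig context ever exists, so this output is expected noise rather than a failure (the cilium dump below is the same situation). Any kubectl call against the absent context reproduces it (illustrative command):

	kubectl --context kubenet-236437 get pods
	# error: context "kubenet-236437" does not exist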
helpers_test.go:175: Cleaning up "kubenet-236437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-236437
--- SKIP: TestNetworkPlugins/group/kubenet (3.09s)

TestNetworkPlugins/group/cilium (3.66s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-236437 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-236437

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-236437

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-236437

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-236437

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-236437

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-236437

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-236437

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-236437

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-236437

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-236437

>>> host: /etc/nsswitch.conf:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: /etc/hosts:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: /etc/resolv.conf:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-236437

>>> host: crictl pods:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: crictl containers:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> k8s: describe netcat deployment:
error: context "cilium-236437" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-236437" does not exist

>>> k8s: netcat logs:
error: context "cilium-236437" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-236437" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-236437" does not exist

>>> k8s: coredns logs:
error: context "cilium-236437" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-236437" does not exist

>>> k8s: api server logs:
error: context "cilium-236437" does not exist

>>> host: /etc/cni:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: ip a s:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: ip r s:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: iptables-save:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: iptables table nat:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-236437

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-236437

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-236437" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-236437" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-236437

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-236437

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-236437" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-236437" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-236437" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-236437" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-236437" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: kubelet daemon config:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> k8s: kubelet logs:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-236437

>>> host: docker daemon status:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: docker daemon config:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: docker system info:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: cri-docker daemon status:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: cri-docker daemon config:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: cri-dockerd version:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: containerd daemon status:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: containerd daemon config:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: containerd config dump:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: crio daemon status:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: crio daemon config:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: /etc/crio:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

>>> host: crio config:
* Profile "cilium-236437" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236437"

----------------------- debugLogs end: cilium-236437 [took: 3.47861714s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-236437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-236437
--- SKIP: TestNetworkPlugins/group/cilium (3.66s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-581935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-581935
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)