Test Report: Docker_Linux_containerd 21139

                    
acfd8b7155af18aff79ff1a575a474dfb6fd930f:2025-10-09:41835

Tests failed (1/333)

Order | Failed test | Duration
276 | TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads | 2.75s
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (2.75s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:91: Checking cache directory: /home/jenkins/minikube-integration/21139-140450/.minikube/cache/linux/amd64/v0.0.0
no_kubernetes_test.go:100: Cache directory exists but is empty
no_kubernetes_test.go:102: Cache directory /home/jenkins/minikube-integration/21139-140450/.minikube/cache/linux/amd64/v0.0.0 should not exist when using --no-kubernetes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect NoKubernetes-847951
helpers_test.go:243: (dbg) docker inspect NoKubernetes-847951:

-- stdout --
	[
	    {
	        "Id": "82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0",
	        "Created": "2025-10-09T18:28:17.65935633Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 343875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:28:18.171337709Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0/hosts",
	        "LogPath": "/var/lib/docker/containers/82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0/82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0-json.log",
	        "Name": "/NoKubernetes-847951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "NoKubernetes-847951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "NoKubernetes-847951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0",
	                "LowerDir": "/var/lib/docker/overlay2/ddb12026e9d292bd26e86ab0c9a1b530ad5c970a0ef39f3cd628266b2f8241f6-init/diff:/var/lib/docker/overlay2/2a598a362d6b1138dfd456c417c26d95545a2673435fc2114840f46031e2745b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddb12026e9d292bd26e86ab0c9a1b530ad5c970a0ef39f3cd628266b2f8241f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddb12026e9d292bd26e86ab0c9a1b530ad5c970a0ef39f3cd628266b2f8241f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddb12026e9d292bd26e86ab0c9a1b530ad5c970a0ef39f3cd628266b2f8241f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "NoKubernetes-847951",
	                "Source": "/var/lib/docker/volumes/NoKubernetes-847951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "NoKubernetes-847951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "NoKubernetes-847951",
	                "name.minikube.sigs.k8s.io": "NoKubernetes-847951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "33870d0494c79d469c1251407f7661d1aff6a44e03f0a2348fa81b7bfd8b6fb1",
	            "SandboxKey": "/var/run/docker/netns/33870d0494c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33003"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33004"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33007"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33005"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33006"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "NoKubernetes-847951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:7a:1c:d4:a3:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1b75b78bdfe92b60c75667900dc5720a91a1e6067b3792d7ee6eb086c6c84bc",
	                    "EndpointID": "b4afb3b50e57da612f1c0ed851978f88071ab6d0e65d8fcf32c0367b951ae3eb",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "NoKubernetes-847951",
	                        "82b5b957bef9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-847951 -n NoKubernetes-847951
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-847951 -n NoKubernetes-847951: exit status 6 (333.673619ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:28:22.718840  347425 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-847951" does not appear in /home/jenkins/minikube-integration/21139-140450/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-847951 logs -n 25
helpers_test.go:260: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-265552 sudo docker system info                                                                                                       │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl status cri-docker --all --full --no-pager                                                                      │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl cat cri-docker --no-pager                                                                                      │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                 │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                           │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo cri-dockerd --version                                                                                                    │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl status containerd --all --full --no-pager                                                                      │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl cat containerd --no-pager                                                                                      │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo cat /lib/systemd/system/containerd.service                                                                               │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo cat /etc/containerd/config.toml                                                                                          │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo containerd config dump                                                                                                   │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl status crio --all --full --no-pager                                                                            │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl cat crio --no-pager                                                                                            │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                  │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo crio config                                                                                                              │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ delete  │ -p cilium-265552                                                                                                                               │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:27 UTC │
	│ start   │ -p stopped-upgrade-729726 --memory=3072 --vm-driver=docker  --container-runtime=containerd                                                     │ stopped-upgrade-729726    │ jenkins │ v1.32.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ force-systemd-env-855890 ssh cat /etc/containerd/config.toml                                                                                   │ force-systemd-env-855890  │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:27 UTC │
	│ delete  │ -p force-systemd-env-855890                                                                                                                    │ force-systemd-env-855890  │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:27 UTC │
	│ start   │ -p NoKubernetes-847951 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                    │ NoKubernetes-847951       │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:28 UTC │
	│ start   │ -p missing-upgrade-552528 --memory=3072 --driver=docker  --container-runtime=containerd                                                        │ missing-upgrade-552528    │ jenkins │ v1.32.0 │ 09 Oct 25 18:27 UTC │                     │
	│ delete  │ -p offline-containerd-818450                                                                                                                   │ offline-containerd-818450 │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:28 UTC │
	│ start   │ -p kubernetes-upgrade-701596 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd │ kubernetes-upgrade-701596 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ delete  │ -p NoKubernetes-847951                                                                                                                         │ NoKubernetes-847951       │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ start   │ -p NoKubernetes-847951 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                    │ NoKubernetes-847951       │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:28:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:28:12.007319  341627 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:28:12.007625  341627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:12.007635  341627 out.go:374] Setting ErrFile to fd 2...
	I1009 18:28:12.007640  341627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:12.007913  341627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	I1009 18:28:12.008605  341627 out.go:368] Setting JSON to false
	I1009 18:28:12.009908  341627 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4232,"bootTime":1760030260,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:28:12.010018  341627 start.go:141] virtualization: kvm guest
	I1009 18:28:12.059871  341627 out.go:179] * [NoKubernetes-847951] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:28:12.061144  341627 notify.go:220] Checking for updates...
	I1009 18:28:12.061167  341627 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:28:12.062567  341627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:28:12.064662  341627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	I1009 18:28:12.066106  341627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	I1009 18:28:12.070400  341627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:28:12.071967  341627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:28:12.073926  341627 config.go:182] Loaded profile config "kubernetes-upgrade-701596": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1009 18:28:12.074073  341627 config.go:182] Loaded profile config "missing-upgrade-552528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1009 18:28:12.074212  341627 config.go:182] Loaded profile config "stopped-upgrade-729726": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1009 18:28:12.074247  341627 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 18:28:12.074352  341627 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:28:12.101670  341627 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:28:12.101790  341627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:12.176010  341627 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:99 SystemTime:2025-10-09 18:28:12.163396808 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:12.176211  341627 docker.go:318] overlay module found
	I1009 18:28:12.178500  341627 out.go:179] * Using the docker driver based on user configuration
	I1009 18:28:12.179539  341627 start.go:305] selected driver: docker
	I1009 18:28:12.179560  341627 start.go:925] validating driver "docker" against <nil>
	I1009 18:28:12.179575  341627 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:28:12.180357  341627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:12.262551  341627 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-09 18:28:12.250822574 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:12.262694  341627 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 18:28:12.262790  341627 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:28:12.263108  341627 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:28:12.264875  341627 out.go:179] * Using Docker driver with root privileges
	I1009 18:28:12.265922  341627 cni.go:84] Creating CNI manager for ""
	I1009 18:28:12.266018  341627 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:28:12.266033  341627 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:28:12.266066  341627 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 18:28:12.266137  341627 start.go:349] cluster config:
	{Name:NoKubernetes-847951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-847951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:12.267282  341627 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-847951
	I1009 18:28:12.268317  341627 cache.go:133] Beginning downloading kic base image for docker with containerd
	I1009 18:28:12.269391  341627 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:28:12.270449  341627 cache.go:58] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1009 18:28:12.270546  341627 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:28:12.270691  341627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/NoKubernetes-847951/config.json ...
	I1009 18:28:12.270729  341627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/NoKubernetes-847951/config.json: {Name:mk8acae52a86147cd8ec6a24f9ad8611a87d36b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:12.294609  341627 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:28:12.294643  341627 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:28:12.294665  341627 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:28:12.294698  341627 start.go:360] acquireMachinesLock for NoKubernetes-847951: {Name:mkf32ae34eb47bcc7ba08a99cd03ce047ae6cf03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:28:12.294764  341627 start.go:364] duration metric: took 42.969µs to acquireMachinesLock for "NoKubernetes-847951"
	I1009 18:28:12.294789  341627 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-847951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-847951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1009 18:28:12.294891  341627 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:28:10.143298  335435 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:28:10.143610  335435 start.go:159] libmachine.API.Create for "stopped-upgrade-729726" (driver="docker")
	I1009 18:28:10.143638  335435 client.go:168] LocalClient.Create starting
	I1009 18:28:10.143710  335435 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem
	I1009 18:28:10.143747  335435 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:10.143771  335435 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:10.143847  335435 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem
	I1009 18:28:10.143868  335435 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:10.143878  335435 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:10.144381  335435 cli_runner.go:164] Run: docker network inspect stopped-upgrade-729726 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:28:10.166341  335435 cli_runner.go:211] docker network inspect stopped-upgrade-729726 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:28:10.166411  335435 network_create.go:281] running [docker network inspect stopped-upgrade-729726] to gather additional debugging logs...
	I1009 18:28:10.166427  335435 cli_runner.go:164] Run: docker network inspect stopped-upgrade-729726
	W1009 18:28:10.186602  335435 cli_runner.go:211] docker network inspect stopped-upgrade-729726 returned with exit code 1
	I1009 18:28:10.186629  335435 network_create.go:284] error running [docker network inspect stopped-upgrade-729726]: docker network inspect stopped-upgrade-729726: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network stopped-upgrade-729726 not found
	I1009 18:28:10.186646  335435 network_create.go:286] output of [docker network inspect stopped-upgrade-729726]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network stopped-upgrade-729726 not found
	
	** /stderr **
	I1009 18:28:10.186731  335435 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:10.206089  335435 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a776d4a7d86a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:a7:10:79:cc:07} reservation:<nil>}
	I1009 18:28:10.206981  335435 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-98ca10e9ecda IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:3b:88:20:02:72} reservation:<nil>}
	I1009 18:28:10.207801  335435 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2287629eec3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:e5:92:f7:19:89} reservation:<nil>}
	I1009 18:28:10.208723  335435 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd1a93c0c2b4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:e7:b5:31:47:49} reservation:<nil>}
	I1009 18:28:10.209420  335435 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-59758b7aed05 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:be:a1:3f:01:8e:55} reservation:<nil>}
	I1009 18:28:10.210324  335435 network.go:214] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-8f4ead0b5675 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:86:ee:81:3a:32} reservation:<nil>}
	I1009 18:28:10.211555  335435 network.go:209] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027c9220}
	I1009 18:28:10.211577  335435 network_create.go:124] attempt to create docker network stopped-upgrade-729726 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1009 18:28:10.211641  335435 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=stopped-upgrade-729726 stopped-upgrade-729726
	I1009 18:28:10.694736  335435 network_create.go:108] docker network stopped-upgrade-729726 192.168.103.0/24 created
	I1009 18:28:10.694768  335435 kic.go:121] calculated static IP "192.168.103.2" for the "stopped-upgrade-729726" container
	I1009 18:28:10.694847  335435 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:28:10.716248  335435 cli_runner.go:164] Run: docker volume create stopped-upgrade-729726 --label name.minikube.sigs.k8s.io=stopped-upgrade-729726 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:28:10.784219  335435 oci.go:103] Successfully created a docker volume stopped-upgrade-729726
	I1009 18:28:10.784318  335435 cli_runner.go:164] Run: docker run --rm --name stopped-upgrade-729726-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-729726 --entrypoint /usr/bin/test -v stopped-upgrade-729726:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1009 18:28:09.319166  340368 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:28:09.319470  340368 start.go:159] libmachine.API.Create for "kubernetes-upgrade-701596" (driver="docker")
	I1009 18:28:09.319505  340368 client.go:168] LocalClient.Create starting
	I1009 18:28:09.319583  340368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem
	I1009 18:28:09.319616  340368 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:09.319638  340368 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:09.319714  340368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem
	I1009 18:28:09.319755  340368 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:09.319780  340368 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:09.320263  340368 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-701596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:28:09.337936  340368 cli_runner.go:211] docker network inspect kubernetes-upgrade-701596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:28:09.338003  340368 network_create.go:284] running [docker network inspect kubernetes-upgrade-701596] to gather additional debugging logs...
	I1009 18:28:09.338024  340368 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-701596
	W1009 18:28:09.354735  340368 cli_runner.go:211] docker network inspect kubernetes-upgrade-701596 returned with exit code 1
	I1009 18:28:09.354769  340368 network_create.go:287] error running [docker network inspect kubernetes-upgrade-701596]: docker network inspect kubernetes-upgrade-701596: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-701596 not found
	I1009 18:28:09.354790  340368 network_create.go:289] output of [docker network inspect kubernetes-upgrade-701596]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-701596 not found
	
	** /stderr **
	I1009 18:28:09.354880  340368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:09.371446  340368 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a776d4a7d86a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:a7:10:79:cc:07} reservation:<nil>}
	I1009 18:28:09.371780  340368 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-98ca10e9ecda IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:3b:88:20:02:72} reservation:<nil>}
	I1009 18:28:09.372102  340368 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2287629eec3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:e5:92:f7:19:89} reservation:<nil>}
	I1009 18:28:09.372592  340368 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d2eb80}
	I1009 18:28:09.372620  340368 network_create.go:124] attempt to create docker network kubernetes-upgrade-701596 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 18:28:09.372663  340368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-701596 kubernetes-upgrade-701596
	I1009 18:28:09.428682  340368 network_create.go:108] docker network kubernetes-upgrade-701596 192.168.76.0/24 created
	I1009 18:28:09.428715  340368 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-701596" container
	I1009 18:28:09.428798  340368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:28:09.445416  340368 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-701596 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-701596 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:28:09.462547  340368 oci.go:103] Successfully created a docker volume kubernetes-upgrade-701596
	I1009 18:28:09.462619  340368 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-701596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-701596 --entrypoint /usr/bin/test -v kubernetes-upgrade-701596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:28:09.899857  340368 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-701596
	I1009 18:28:09.899931  340368 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1009 18:28:09.899942  340368 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:28:09.900002  340368 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-701596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:28:09.948913  339253 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:28:09.949211  339253 start.go:159] libmachine.API.Create for "missing-upgrade-552528" (driver="docker")
	I1009 18:28:09.949242  339253 client.go:168] LocalClient.Create starting
	I1009 18:28:09.949308  339253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem
	I1009 18:28:09.949346  339253 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:09.949364  339253 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:09.949425  339253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem
	I1009 18:28:09.949448  339253 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:09.949459  339253 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:09.949856  339253 cli_runner.go:164] Run: docker network inspect missing-upgrade-552528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:28:09.969391  339253 cli_runner.go:211] docker network inspect missing-upgrade-552528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:28:09.969453  339253 network_create.go:281] running [docker network inspect missing-upgrade-552528] to gather additional debugging logs...
	I1009 18:28:09.969472  339253 cli_runner.go:164] Run: docker network inspect missing-upgrade-552528
	W1009 18:28:09.990468  339253 cli_runner.go:211] docker network inspect missing-upgrade-552528 returned with exit code 1
	I1009 18:28:09.990495  339253 network_create.go:284] error running [docker network inspect missing-upgrade-552528]: docker network inspect missing-upgrade-552528: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-552528 not found
	I1009 18:28:09.990511  339253 network_create.go:286] output of [docker network inspect missing-upgrade-552528]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-552528 not found
	
	** /stderr **
	I1009 18:28:09.990640  339253 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:10.010592  339253 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a776d4a7d86a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:a7:10:79:cc:07} reservation:<nil>}
	I1009 18:28:10.012318  339253 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-98ca10e9ecda IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:3b:88:20:02:72} reservation:<nil>}
	I1009 18:28:10.013513  339253 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2287629eec3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:e5:92:f7:19:89} reservation:<nil>}
	I1009 18:28:10.014369  339253 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd1a93c0c2b4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:e7:b5:31:47:49} reservation:<nil>}
	I1009 18:28:10.015263  339253 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024b27b0}
	I1009 18:28:10.015287  339253 network_create.go:124] attempt to create docker network missing-upgrade-552528 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 18:28:10.015345  339253 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-552528 missing-upgrade-552528
	I1009 18:28:10.086849  339253 network_create.go:108] docker network missing-upgrade-552528 192.168.85.0/24 created
	I1009 18:28:10.086880  339253 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-552528" container
	I1009 18:28:10.086961  339253 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:28:10.108801  339253 cli_runner.go:164] Run: docker volume create missing-upgrade-552528 --label name.minikube.sigs.k8s.io=missing-upgrade-552528 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:28:10.138143  339253 oci.go:103] Successfully created a docker volume missing-upgrade-552528
	I1009 18:28:10.138220  339253 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-552528-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-552528 --entrypoint /usr/bin/test -v missing-upgrade-552528:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1009 18:28:11.853928  339253 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-552528-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-552528 --entrypoint /usr/bin/test -v missing-upgrade-552528:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.715664139s)
	I1009 18:28:11.853952  339253 oci.go:107] Successfully prepared a docker volume missing-upgrade-552528
	I1009 18:28:11.853979  339253 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1009 18:28:11.854004  339253 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:28:11.854084  339253 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-552528:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:28:12.298230  341627 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:28:12.298499  341627 start.go:159] libmachine.API.Create for "NoKubernetes-847951" (driver="docker")
	I1009 18:28:12.298541  341627 client.go:168] LocalClient.Create starting
	I1009 18:28:12.298650  341627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem
	I1009 18:28:12.298705  341627 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:12.298729  341627 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:12.298809  341627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem
	I1009 18:28:12.298848  341627 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:12.298887  341627 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:12.299343  341627 cli_runner.go:164] Run: docker network inspect NoKubernetes-847951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:28:12.321799  341627 cli_runner.go:211] docker network inspect NoKubernetes-847951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:28:12.321914  341627 network_create.go:284] running [docker network inspect NoKubernetes-847951] to gather additional debugging logs...
	I1009 18:28:12.321953  341627 cli_runner.go:164] Run: docker network inspect NoKubernetes-847951
	W1009 18:28:12.345187  341627 cli_runner.go:211] docker network inspect NoKubernetes-847951 returned with exit code 1
	I1009 18:28:12.345245  341627 network_create.go:287] error running [docker network inspect NoKubernetes-847951]: docker network inspect NoKubernetes-847951: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-847951 not found
	I1009 18:28:12.345267  341627 network_create.go:289] output of [docker network inspect NoKubernetes-847951]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-847951 not found
	
	** /stderr **
	I1009 18:28:12.345423  341627 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:12.368564  341627 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a776d4a7d86a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:a7:10:79:cc:07} reservation:<nil>}
	I1009 18:28:12.369169  341627 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-98ca10e9ecda IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:3b:88:20:02:72} reservation:<nil>}
	I1009 18:28:12.369762  341627 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2287629eec3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:e5:92:f7:19:89} reservation:<nil>}
	I1009 18:28:12.370399  341627 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd1a93c0c2b4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:e7:b5:31:47:49} reservation:<nil>}
	I1009 18:28:12.370768  341627 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-59758b7aed05 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:be:a1:3f:01:8e:55} reservation:<nil>}
	I1009 18:28:12.371459  341627 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e080e0}
	I1009 18:28:12.371493  341627 network_create.go:124] attempt to create docker network NoKubernetes-847951 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1009 18:28:12.371548  341627 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-847951 NoKubernetes-847951
	I1009 18:28:12.814873  341627 network_create.go:108] docker network NoKubernetes-847951 192.168.94.0/24 created
	I1009 18:28:12.814915  341627 kic.go:121] calculated static IP "192.168.94.2" for the "NoKubernetes-847951" container
	I1009 18:28:12.814988  341627 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:28:12.837062  341627 cli_runner.go:164] Run: docker volume create NoKubernetes-847951 --label name.minikube.sigs.k8s.io=NoKubernetes-847951 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:28:13.064284  341627 oci.go:103] Successfully created a docker volume NoKubernetes-847951
	I1009 18:28:13.064423  341627 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-847951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-847951 --entrypoint /usr/bin/test -v NoKubernetes-847951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:28:13.153579  335435 cli_runner.go:217] Completed: docker run --rm --name stopped-upgrade-729726-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-729726 --entrypoint /usr/bin/test -v stopped-upgrade-729726:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (2.369215652s)
	I1009 18:28:13.153603  335435 oci.go:107] Successfully prepared a docker volume stopped-upgrade-729726
	I1009 18:28:13.153620  335435 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1009 18:28:13.153641  335435 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:28:13.153701  335435 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-729726:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:28:15.825157  340368 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-701596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.925079745s)
	I1009 18:28:15.825210  340368 kic.go:203] duration metric: took 5.925262454s to extract preloaded images to volume ...
	W1009 18:28:15.825320  340368 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:28:15.825363  340368 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:28:15.825412  340368 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:28:15.918621  340368 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-701596 --name kubernetes-upgrade-701596 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-701596 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-701596 --network kubernetes-upgrade-701596 --ip 192.168.76.2 --volume kubernetes-upgrade-701596:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:28:16.368638  340368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-701596 --format={{.State.Running}}
	I1009 18:28:16.388560  340368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-701596 --format={{.State.Status}}
	I1009 18:28:16.413494  340368 cli_runner.go:164] Run: docker exec kubernetes-upgrade-701596 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:28:16.492158  340368 oci.go:144] the created container "kubernetes-upgrade-701596" has a running status.
	I1009 18:28:16.492202  340368 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa...
	I1009 18:28:16.846798  340368 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:28:16.993230  340368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-701596 --format={{.State.Status}}
	I1009 18:28:17.022782  340368 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:28:17.022814  340368 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-701596 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:28:17.115017  340368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-701596 --format={{.State.Status}}
	I1009 18:28:17.135631  340368 machine.go:93] provisionDockerMachine start ...
	I1009 18:28:17.135743  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:17.159004  340368 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:17.171868  340368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1009 18:28:17.171903  340368 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:28:17.319000  340368 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-701596
	
	I1009 18:28:17.319033  340368 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-701596"
	I1009 18:28:17.319088  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:17.336415  340368 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:17.336747  340368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1009 18:28:17.336774  340368 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-701596 && echo "kubernetes-upgrade-701596" | sudo tee /etc/hostname
	I1009 18:28:17.497508  340368 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-701596
	
	I1009 18:28:17.497620  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:17.516135  340368 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:17.516467  340368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1009 18:28:17.516516  340368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-701596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-701596/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-701596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:17.672832  340368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:17.672892  340368 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-140450/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-140450/.minikube}
	I1009 18:28:17.672930  340368 ubuntu.go:190] setting up certificates
	I1009 18:28:17.672945  340368 provision.go:84] configureAuth start
	I1009 18:28:17.673019  340368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-701596
	I1009 18:28:17.690984  340368 provision.go:143] copyHostCerts
	I1009 18:28:17.691042  340368 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem, removing ...
	I1009 18:28:17.691053  340368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem
	I1009 18:28:17.692170  340368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem (1675 bytes)
	I1009 18:28:17.692289  340368 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem, removing ...
	I1009 18:28:17.692300  340368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem
	I1009 18:28:17.692333  340368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem (1078 bytes)
	I1009 18:28:17.692415  340368 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem, removing ...
	I1009 18:28:17.692424  340368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem
	I1009 18:28:17.692455  340368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem (1123 bytes)
	I1009 18:28:17.692558  340368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-701596 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-701596 localhost minikube]
	I1009 18:28:17.852316  340368 provision.go:177] copyRemoteCerts
	I1009 18:28:17.852392  340368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:17.852429  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:17.872228  340368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa Username:docker}
	I1009 18:28:17.975081  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:28:18.096505  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 18:28:18.124391  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:28:18.149407  340368 provision.go:87] duration metric: took 476.437203ms to configureAuth
	I1009 18:28:18.149456  340368 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:28:18.149653  340368 config.go:182] Loaded profile config "kubernetes-upgrade-701596": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1009 18:28:18.149668  340368 machine.go:96] duration metric: took 1.014011919s to provisionDockerMachine
	I1009 18:28:18.149677  340368 client.go:171] duration metric: took 8.830161985s to LocalClient.Create
	I1009 18:28:18.149701  340368 start.go:167] duration metric: took 8.830233842s to libmachine.API.Create "kubernetes-upgrade-701596"
	I1009 18:28:18.149711  340368 start.go:293] postStartSetup for "kubernetes-upgrade-701596" (driver="docker")
	I1009 18:28:18.149723  340368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:18.149783  340368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:18.149829  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:18.184179  340368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa Username:docker}
	I1009 18:28:18.310490  340368 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:18.315729  340368 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:18.315769  340368 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:28:18.315785  340368 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/addons for local assets ...
	I1009 18:28:18.315848  340368 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/files for local assets ...
	I1009 18:28:18.315952  340368 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem -> 1440942.pem in /etc/ssl/certs
	I1009 18:28:18.316089  340368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:28:18.333291  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:18.367762  340368 start.go:296] duration metric: took 218.032903ms for postStartSetup
	I1009 18:28:18.376358  340368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-701596
	I1009 18:28:18.396542  340368 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/config.json ...
	I1009 18:28:18.424372  340368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:18.424431  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:18.447483  340368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa Username:docker}
	I1009 18:28:18.553160  340368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:18.558274  340368 start.go:128] duration metric: took 9.240574027s to createHost
	I1009 18:28:18.558297  340368 start.go:83] releasing machines lock for "kubernetes-upgrade-701596", held for 9.240709036s
	I1009 18:28:18.558374  340368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-701596
	I1009 18:28:18.576922  340368 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:18.576975  340368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:18.576987  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:18.577034  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:18.597580  340368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa Username:docker}
	I1009 18:28:18.598975  340368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa Username:docker}
	I1009 18:28:18.755610  340368 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:18.762180  340368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:28:18.786991  340368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:28:18.787073  340368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:18.828909  340368 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:28:18.828938  340368 start.go:495] detecting cgroup driver to use...
	I1009 18:28:18.828974  340368 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:28:18.829023  340368 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 18:28:18.851221  340368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 18:28:18.867806  340368 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:28:18.867867  340368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:18.893768  340368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:18.919847  340368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:19.050477  340368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:15.826500  339253 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-552528:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.972347642s)
	I1009 18:28:15.826549  339253 kic.go:203] duration metric: took 3.972541 seconds to extract preloaded images to volume
	W1009 18:28:15.826663  339253 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:28:15.826710  339253 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:28:15.826757  339253 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:28:15.907780  339253 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-552528 --name missing-upgrade-552528 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-552528 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-552528 --network missing-upgrade-552528 --ip 192.168.85.2 --volume missing-upgrade-552528:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1009 18:28:16.261334  339253 cli_runner.go:164] Run: docker container inspect missing-upgrade-552528 --format={{.State.Running}}
	I1009 18:28:16.281588  339253 cli_runner.go:164] Run: docker container inspect missing-upgrade-552528 --format={{.State.Status}}
	I1009 18:28:16.301053  339253 cli_runner.go:164] Run: docker exec missing-upgrade-552528 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:28:16.345491  339253 oci.go:144] the created container "missing-upgrade-552528" has a running status.
	I1009 18:28:16.345534  339253 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa...
	I1009 18:28:16.757480  339253 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:28:16.920931  339253 cli_runner.go:164] Run: docker container inspect missing-upgrade-552528 --format={{.State.Status}}
	I1009 18:28:16.941167  339253 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:28:16.941185  339253 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-552528 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:28:17.115036  339253 cli_runner.go:164] Run: docker container inspect missing-upgrade-552528 --format={{.State.Status}}
	I1009 18:28:17.136135  339253 machine.go:88] provisioning docker machine ...
	I1009 18:28:17.136176  339253 ubuntu.go:169] provisioning hostname "missing-upgrade-552528"
	I1009 18:28:17.136243  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:17.157035  339253 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:17.157628  339253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1009 18:28:17.157647  339253 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-552528 && echo "missing-upgrade-552528" | sudo tee /etc/hostname
	I1009 18:28:17.344875  339253 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-552528
	
	I1009 18:28:17.344957  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:17.365340  339253 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:17.365708  339253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1009 18:28:17.365729  339253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-552528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-552528/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-552528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:17.482876  339253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:17.482899  339253 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21139-140450/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-140450/.minikube}
	I1009 18:28:17.482943  339253 ubuntu.go:177] setting up certificates
	I1009 18:28:17.482963  339253 provision.go:83] configureAuth start
	I1009 18:28:17.483027  339253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-552528
	I1009 18:28:17.504232  339253 provision.go:138] copyHostCerts
	I1009 18:28:17.504287  339253 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem, removing ...
	I1009 18:28:17.504293  339253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem
	I1009 18:28:17.507217  339253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem (1078 bytes)
	I1009 18:28:17.507365  339253 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem, removing ...
	I1009 18:28:17.507374  339253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem
	I1009 18:28:17.507423  339253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem (1123 bytes)
	I1009 18:28:17.507577  339253 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem, removing ...
	I1009 18:28:17.507584  339253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem
	I1009 18:28:17.507630  339253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem (1675 bytes)
	I1009 18:28:17.507709  339253 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-552528 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-552528]
	I1009 18:28:17.611411  339253 provision.go:172] copyRemoteCerts
	I1009 18:28:17.611469  339253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:17.611538  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:17.633641  339253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa Username:docker}
	I1009 18:28:17.724262  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:28:17.891979  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:28:17.942021  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 18:28:18.076680  339253 provision.go:86] duration metric: configureAuth took 593.701508ms
	I1009 18:28:18.076698  339253 ubuntu.go:193] setting minikube options for container-runtime
	I1009 18:28:18.076917  339253 config.go:182] Loaded profile config "missing-upgrade-552528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1009 18:28:18.076927  339253 machine.go:91] provisioned docker machine in 940.778013ms
	I1009 18:28:18.076941  339253 client.go:171] LocalClient.Create took 8.127687308s
	I1009 18:28:18.076963  339253 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-552528" took 8.127755607s
	I1009 18:28:18.076971  339253 start.go:300] post-start starting for "missing-upgrade-552528" (driver="docker")
	I1009 18:28:18.076983  339253 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:18.077037  339253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:18.077076  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:18.098690  339253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa Username:docker}
	I1009 18:28:18.200933  339253 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:18.208211  339253 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:18.208274  339253 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 18:28:18.208287  339253 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 18:28:18.208295  339253 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1009 18:28:18.208307  339253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/addons for local assets ...
	I1009 18:28:18.208359  339253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/files for local assets ...
	I1009 18:28:18.208452  339253 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem -> 1440942.pem in /etc/ssl/certs
	I1009 18:28:18.208573  339253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:28:18.221419  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:18.255930  339253 start.go:303] post-start completed in 178.941529ms
	I1009 18:28:18.256376  339253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-552528
	I1009 18:28:18.277959  339253 profile.go:148] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/config.json ...
	I1009 18:28:18.278231  339253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:18.278274  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:18.306233  339253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa Username:docker}
	I1009 18:28:18.399271  339253 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:18.404324  339253 start.go:128] duration metric: createHost completed in 8.457005474s
	I1009 18:28:18.404341  339253 start.go:83] releasing machines lock for "missing-upgrade-552528", held for 8.457174693s
	I1009 18:28:18.404414  339253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-552528
	I1009 18:28:18.423842  339253 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:18.423887  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:18.423933  339253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:18.424026  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:18.446958  339253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa Username:docker}
	I1009 18:28:18.447052  339253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa Username:docker}
	I1009 18:28:18.529188  339253 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:18.637230  339253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:28:18.641989  339253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1009 18:28:18.815377  339253 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1009 18:28:18.815448  339253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:18.853647  339253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
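The two `find` lines above disable conflicting CNI configs by renaming them aside with a `.mk_disabled` suffix rather than deleting them. A minimal sketch of that rename pattern, run against a throwaway directory so no root or real `/etc/cni/net.d` is needed (the directory and file names below are hypothetical stand-ins):

```shell
# Temp stand-in for /etc/cni/net.d
cni=$(mktemp -d)
touch "$cni/87-podman-bridge.conflist" "$cni/100-crio-bridge.conf" "$cni/10-kindnet.conflist"

# Same pattern as the log: move bridge/podman configs aside with a
# .mk_disabled suffix so the container runtime ignores them
find "$cni" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
  -and -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$cni"
```

Because the originals are only renamed, re-enabling a config is just stripping the suffix back off.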
	I1009 18:28:18.853684  339253 start.go:472] detecting cgroup driver to use...
	I1009 18:28:18.853718  339253 detect.go:199] detected "systemd" cgroup driver on host os
	I1009 18:28:18.853785  339253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 18:28:18.872870  339253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 18:28:18.887142  339253 docker.go:203] disabling cri-docker service (if available) ...
	I1009 18:28:18.887204  339253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:18.904521  339253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:18.921520  339253 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:19.023398  339253 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:19.133892  339253 docker.go:219] disabling docker service ...
	I1009 18:28:19.133948  339253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:19.169407  339253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:19.190259  339253 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:19.352185  339253 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:19.444244  339253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:19.457167  339253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:19.504204  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 18:28:19.520641  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 18:28:19.535387  339253 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I1009 18:28:19.535802  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1009 18:28:19.548763  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:19.565538  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 18:28:19.581582  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:19.595714  339253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:19.609902  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 18:28:19.626938  339253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:19.639900  339253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:19.651665  339253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:19.746786  339253 ssh_runner.go:195] Run: sudo systemctl restart containerd
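The run of `sed -i` commands above patches `/etc/containerd/config.toml` in place (pause image, OOM score adjustment, systemd cgroup driver) before the daemon restart. A minimal sketch of the same substitutions, applied to a local temp copy with a hypothetical sample config so it can be run without root:

```shell
# Throwaway copy instead of the real /etc/containerd/config.toml;
# the starting values below are hypothetical
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
  restrict_oom_score_adj = true
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
EOF

# The same substitutions the log shows minikube running over ssh
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' "$cfg"

grep SystemdCgroup "$cfg"
```

The `( *)` capture plus `\1` back-reference preserves each line's original indentation, which matters because TOML tables in this file are indentation-formatted for readability.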
	I1009 18:28:19.874775  339253 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 18:28:19.874841  339253 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 18:28:19.879005  339253 start.go:540] Will wait 60s for crictl version
	I1009 18:28:19.879062  339253 ssh_runner.go:195] Run: which crictl
	I1009 18:28:19.883704  339253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:28:19.933908  339253 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1009 18:28:19.933968  339253 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:19.966282  339253 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:19.996662  339253 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.6.24 ...
	I1009 18:28:19.193624  340368 docker.go:234] disabling docker service ...
	I1009 18:28:19.193693  340368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:19.219795  340368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:19.232974  340368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:19.383716  340368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:19.500527  340368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:19.516170  340368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:19.535756  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 18:28:19.548903  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 18:28:19.563764  340368 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1009 18:28:19.563975  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1009 18:28:19.577922  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:19.590166  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 18:28:19.603844  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:19.623992  340368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:19.636756  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 18:28:19.650319  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1009 18:28:19.663214  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1009 18:28:19.674868  340368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:19.687790  340368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:19.698847  340368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:19.812050  340368 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 18:28:19.961538  340368 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 18:28:19.961615  340368 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 18:28:19.966070  340368 start.go:563] Will wait 60s for crictl version
	I1009 18:28:19.966165  340368 ssh_runner.go:195] Run: which crictl
	I1009 18:28:19.969830  340368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:28:19.999908  340368 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1009 18:28:19.999976  340368 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:20.027187  340368 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:20.057286  340368 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 1.7.28 ...
	I1009 18:28:19.997722  339253 cli_runner.go:164] Run: docker network inspect missing-upgrade-552528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:20.017269  339253 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 18:28:20.022206  339253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:20.037097  339253 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1009 18:28:20.037207  339253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:20.083340  339253 containerd.go:604] all images are preloaded for containerd runtime.
	I1009 18:28:20.083358  339253 containerd.go:518] Images already preloaded, skipping extraction
	I1009 18:28:20.083420  339253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:20.128842  339253 containerd.go:604] all images are preloaded for containerd runtime.
	I1009 18:28:20.128859  339253 cache_images.go:84] Images are preloaded, skipping loading
	I1009 18:28:20.128977  339253 ssh_runner.go:195] Run: sudo crictl info
	I1009 18:28:20.176831  339253 cni.go:84] Creating CNI manager for ""
	I1009 18:28:20.176847  339253 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:28:20.176869  339253 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1009 18:28:20.176893  339253 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-552528 NodeName:missing-upgrade-552528 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:28:20.177044  339253 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "missing-upgrade-552528"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:28:20.177115  339253 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=missing-upgrade-552528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-552528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1009 18:28:20.177205  339253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1009 18:28:20.188681  339253 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:28:20.188747  339253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:28:20.200700  339253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (394 bytes)
	I1009 18:28:20.224492  339253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:28:20.252570  339253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2110 bytes)
	I1009 18:28:20.271733  339253 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:28:20.275440  339253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
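The `/etc/hosts` edits above (here and at `host.minikube.internal` earlier) follow one pattern: filter out any stale entry, append the fresh one into a temp file, then `cp` the temp file over the original in a single step. A minimal bash sketch of that pattern against a hypothetical temp hosts file, so no sudo is needed:

```shell
# Hypothetical stand-in for /etc/hosts
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n' > "$hosts"

# Same pattern as the log: drop any stale entry, append the new one,
# then replace the file with one cp (minikube runs the cp via sudo)
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo $'192.168.85.2\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
cp /tmp/h.$$ "$hosts" && rm -f /tmp/h.$$

grep control-plane "$hosts"
```

Writing the filtered content to a separate file first avoids the classic `grep file > file` truncation bug, and the final `cp` keeps the original file's inode and permissions.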
	I1009 18:28:20.290905  339253 certs.go:56] Setting up /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528 for IP: 192.168.85.2
	I1009 18:28:20.290945  339253 certs.go:190] acquiring lock for shared ca certs: {Name:mk886b151c2ee368fca29ea3aee2e1e334a9b55c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.291111  339253 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.key
	I1009 18:28:20.291199  339253 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.key
	I1009 18:28:20.291264  339253 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.key
	I1009 18:28:20.291277  339253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.crt with IP's: []
	I1009 18:28:20.424937  339253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.crt ...
	I1009 18:28:20.424959  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.crt: {Name:mkac1afd46e37e3192bd9830ca72e83c71456d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.425737  339253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.key ...
	I1009 18:28:20.425757  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.key: {Name:mk2f1850851a68f2659a31764dea2e5186332a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.425885  339253 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key.43b9df8c
	I1009 18:28:20.425900  339253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1009 18:28:20.591557  339253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt.43b9df8c ...
	I1009 18:28:20.591573  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt.43b9df8c: {Name:mk1080cbb499e6ea09361c5ef9375416d8697855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.591732  339253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key.43b9df8c ...
	I1009 18:28:20.591747  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key.43b9df8c: {Name:mk0f49a3cebc2f447128cff50f901e314db15662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.591834  339253 certs.go:337] copying /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt
	I1009 18:28:20.591915  339253 certs.go:341] copying /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key
	I1009 18:28:20.591964  339253 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.key
	I1009 18:28:20.591985  339253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.crt with IP's: []
	I1009 18:28:20.681848  339253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.crt ...
	I1009 18:28:20.681867  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.crt: {Name:mkd6cc8ccbcd5ece84a6d7dac9a794c13b6bbbdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.682016  339253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.key ...
	I1009 18:28:20.682026  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.key: {Name:mk985d8b7652d49467a2dfb445023f26dd32f6a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.682235  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094.pem (1338 bytes)
	W1009 18:28:20.682268  339253 certs.go:433] ignoring /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094_empty.pem, impossibly tiny 0 bytes
	I1009 18:28:20.682277  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:28:20.682300  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:28:20.682318  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:28:20.682339  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem (1675 bytes)
	I1009 18:28:20.682375  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:20.683044  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1009 18:28:20.714357  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 18:28:20.740068  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:28:20.765538  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:28:20.792319  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:28:20.818472  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:28:20.844500  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:28:20.869960  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:28:20.898672  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094.pem --> /usr/share/ca-certificates/144094.pem (1338 bytes)
	I1009 18:28:20.924817  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /usr/share/ca-certificates/1440942.pem (1708 bytes)
	I1009 18:28:20.947899  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:28:20.971622  339253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:28:20.989435  339253 ssh_runner.go:195] Run: openssl version
	I1009 18:28:20.994828  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144094.pem && ln -fs /usr/share/ca-certificates/144094.pem /etc/ssl/certs/144094.pem"
	I1009 18:28:21.004352  339253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.008464  339253 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:03 /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.008529  339253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.016134  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144094.pem /etc/ssl/certs/51391683.0"
	I1009 18:28:21.025629  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1440942.pem && ln -fs /usr/share/ca-certificates/1440942.pem /etc/ssl/certs/1440942.pem"
	I1009 18:28:21.035088  339253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.038671  339253 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:03 /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.038712  339253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.045547  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1440942.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:28:21.056313  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:28:21.067199  339253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:21.071651  339253 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:21.071696  339253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:21.081360  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
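The three repetitions of `openssl x509 -hash` followed by `ln -fs` above install each CA into the OpenSSL trust directory, which looks certificates up by subject-hash symlinks of the form `<hash>.0`. A minimal sketch of the technique with a throwaway self-signed cert (all names below are hypothetical; requires the `openssl` CLI):

```shell
# Generate a throwaway self-signed cert to act as the CA
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$certdir/demo.key" -out "$certdir/demo.pem" -days 1 2>/dev/null

# Same technique as the log: compute the subject hash, then create
# the <hash>.0 symlink OpenSSL uses to find the cert
hash=$(openssl x509 -hash -noout -in "$certdir/demo.pem")
ln -fs "$certdir/demo.pem" "$certdir/$hash.0"

ls -l "$certdir/$hash.0"
```

The `.0` suffix disambiguates distinct certificates that happen to share a subject hash (`.1`, `.2`, …); the log's `test -L … || ln -fs …` guard simply skips re-linking when the symlink already exists.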
	I1009 18:28:21.091536  339253 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1009 18:28:21.095183  339253 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1009 18:28:21.095232  339253 kubeadm.go:404] StartCluster: {Name:missing-upgrade-552528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-552528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 18:28:21.095313  339253 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1009 18:28:21.095364  339253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:21.131311  339253 cri.go:89] found id: ""
	I1009 18:28:21.131382  339253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:28:21.140764  339253 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:28:21.149608  339253 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:28:21.149658  339253 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:28:21.158720  339253 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:28:21.158758  339253 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:28:21.211447  339253 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1009 18:28:21.211518  339253 kubeadm.go:322] [preflight] Running pre-flight checks
	I1009 18:28:21.264078  339253 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:28:21.264200  339253 kubeadm.go:322] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:28:21.264260  339253 kubeadm.go:322] OS: Linux
	I1009 18:28:21.264320  339253 kubeadm.go:322] CGROUPS_CPU: enabled
	I1009 18:28:21.264384  339253 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1009 18:28:21.264422  339253 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1009 18:28:21.264481  339253 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1009 18:28:21.264538  339253 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1009 18:28:21.264595  339253 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1009 18:28:21.264665  339253 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1009 18:28:21.264721  339253 kubeadm.go:322] CGROUPS_IO: enabled
	I1009 18:28:21.332326  339253 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:28:21.332465  339253 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:28:21.332603  339253 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 18:28:21.571408  339253 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:28:17.573013  341627 cli_runner.go:217] Completed: docker run --rm --name NoKubernetes-847951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-847951 --entrypoint /usr/bin/test -v NoKubernetes-847951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (4.508533478s)
	I1009 18:28:17.573050  341627 oci.go:107] Successfully prepared a docker volume NoKubernetes-847951
	I1009 18:28:17.573137  341627 preload.go:178] Skipping preload logic due to --no-kubernetes flag
	W1009 18:28:17.573233  341627 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:28:17.573289  341627 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:28:17.573343  341627 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:28:17.640826  341627 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-847951 --name NoKubernetes-847951 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-847951 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-847951 --network NoKubernetes-847951 --ip 192.168.94.2 --volume NoKubernetes-847951:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:28:18.660343  341627 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-847951 --name NoKubernetes-847951 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-847951 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-847951 --network NoKubernetes-847951 --ip 192.168.94.2 --volume NoKubernetes-847951:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92: (1.019447593s)
	I1009 18:28:18.660453  341627 cli_runner.go:164] Run: docker container inspect NoKubernetes-847951 --format={{.State.Running}}
	I1009 18:28:18.676818  341627 cli_runner.go:164] Run: docker container inspect NoKubernetes-847951 --format={{.State.Status}}
	I1009 18:28:18.693147  341627 cli_runner.go:164] Run: docker exec NoKubernetes-847951 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:28:18.807862  341627 oci.go:144] the created container "NoKubernetes-847951" has a running status.
	I1009 18:28:18.807895  341627 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa...
	I1009 18:28:19.422927  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:28:19.423066  341627 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:28:19.515239  341627 cli_runner.go:164] Run: docker container inspect NoKubernetes-847951 --format={{.State.Status}}
	I1009 18:28:19.536831  341627 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:28:19.536852  341627 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-847951 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:28:19.594887  341627 cli_runner.go:164] Run: docker container inspect NoKubernetes-847951 --format={{.State.Status}}
	I1009 18:28:19.617912  341627 machine.go:93] provisionDockerMachine start ...
	I1009 18:28:19.618242  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:19.642805  341627 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:19.643169  341627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1009 18:28:19.643189  341627 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:28:19.808934  341627 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-847951
	
	I1009 18:28:19.808970  341627 ubuntu.go:182] provisioning hostname "NoKubernetes-847951"
	I1009 18:28:19.809041  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:19.830861  341627 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:19.831072  341627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1009 18:28:19.831085  341627 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-847951 && echo "NoKubernetes-847951" | sudo tee /etc/hostname
	I1009 18:28:20.012662  341627 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-847951
	
	I1009 18:28:20.012761  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:20.034570  341627 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:20.034861  341627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1009 18:28:20.034883  341627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-847951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-847951/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-847951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:20.198504  341627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:20.198533  341627 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-140450/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-140450/.minikube}
	I1009 18:28:20.198564  341627 ubuntu.go:190] setting up certificates
	I1009 18:28:20.198586  341627 provision.go:84] configureAuth start
	I1009 18:28:20.198648  341627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-847951
	I1009 18:28:20.221266  341627 provision.go:143] copyHostCerts
	I1009 18:28:20.221306  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem
	I1009 18:28:20.221346  341627 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem, removing ...
	I1009 18:28:20.221355  341627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem
	I1009 18:28:20.221429  341627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem (1675 bytes)
	I1009 18:28:20.221575  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem
	I1009 18:28:20.221602  341627 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem, removing ...
	I1009 18:28:20.221608  341627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem
	I1009 18:28:20.221654  341627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem (1078 bytes)
	I1009 18:28:20.221731  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem
	I1009 18:28:20.221753  341627 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem, removing ...
	I1009 18:28:20.221760  341627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem
	I1009 18:28:20.221798  341627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem (1123 bytes)
	I1009 18:28:20.221871  341627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-847951 san=[127.0.0.1 192.168.94.2 NoKubernetes-847951 localhost minikube]
	I1009 18:28:20.688322  341627 provision.go:177] copyRemoteCerts
	I1009 18:28:20.688392  341627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:20.688447  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:20.710940  341627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa Username:docker}
	I1009 18:28:20.816976  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:28:20.817044  341627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:28:20.840783  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:28:20.840871  341627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 18:28:20.859439  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:28:20.859499  341627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:28:20.880473  341627 provision.go:87] duration metric: took 681.873246ms to configureAuth
	I1009 18:28:20.880499  341627 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:28:20.880664  341627 config.go:182] Loaded profile config "NoKubernetes-847951": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1009 18:28:20.880685  341627 machine.go:96] duration metric: took 1.262586136s to provisionDockerMachine
	I1009 18:28:20.880695  341627 client.go:171] duration metric: took 8.582142105s to LocalClient.Create
	I1009 18:28:20.880721  341627 start.go:167] duration metric: took 8.58222532s to libmachine.API.Create "NoKubernetes-847951"
	I1009 18:28:20.880734  341627 start.go:293] postStartSetup for "NoKubernetes-847951" (driver="docker")
	I1009 18:28:20.880745  341627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:20.880804  341627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:20.880851  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:20.901440  341627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa Username:docker}
	I1009 18:28:21.008517  341627 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:21.012379  341627 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:21.012418  341627 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:28:21.012430  341627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/addons for local assets ...
	I1009 18:28:21.012482  341627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/files for local assets ...
	I1009 18:28:21.012584  341627 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem -> 1440942.pem in /etc/ssl/certs
	I1009 18:28:21.012602  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem -> /etc/ssl/certs/1440942.pem
	I1009 18:28:21.012713  341627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:28:21.020288  341627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:21.039942  341627 start.go:296] duration metric: took 159.193551ms for postStartSetup
	I1009 18:28:21.040334  341627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-847951
	I1009 18:28:21.059971  341627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/NoKubernetes-847951/config.json ...
	I1009 18:28:21.060305  341627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:21.060357  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:21.081806  341627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa Username:docker}
	I1009 18:28:21.183613  341627 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:21.188431  341627 start.go:128] duration metric: took 8.893524264s to createHost
	I1009 18:28:21.188456  341627 start.go:83] releasing machines lock for "NoKubernetes-847951", held for 8.893677897s
	I1009 18:28:21.188534  341627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-847951
	I1009 18:28:21.210070  341627 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:21.210107  341627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:21.210145  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:21.210189  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:21.229828  341627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa Username:docker}
	I1009 18:28:21.232476  341627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa Username:docker}
	I1009 18:28:21.339151  341627 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:21.406743  341627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:28:21.412428  341627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:28:21.412501  341627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:21.437569  341627 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:28:21.437596  341627 start.go:495] detecting cgroup driver to use...
	I1009 18:28:21.437630  341627 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:28:21.437683  341627 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 18:28:21.453198  341627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 18:28:21.466012  341627 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:28:21.466069  341627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:21.485872  341627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:21.507609  341627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:21.599504  341627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:21.701314  341627 docker.go:234] disabling docker service ...
	I1009 18:28:21.701372  341627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:21.722496  341627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:21.736943  341627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:21.836040  341627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:21.927552  341627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:21.940929  341627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:21.958518  341627 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1009 18:28:21.958615  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 18:28:21.973917  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 18:28:21.985308  341627 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1009 18:28:21.985381  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1009 18:28:21.996356  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:22.006661  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 18:28:19.673531  335435 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-729726:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (6.519774325s)
	I1009 18:28:19.673559  335435 kic.go:203] duration metric: took 6.519915 seconds to extract preloaded images to volume
	W1009 18:28:19.673644  335435 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:28:19.673686  335435 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:28:19.673744  335435 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:28:19.757030  335435 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-729726 --name stopped-upgrade-729726 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-729726 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-729726 --network stopped-upgrade-729726 --ip 192.168.103.2 --volume stopped-upgrade-729726:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1009 18:28:20.075190  335435 cli_runner.go:164] Run: docker container inspect stopped-upgrade-729726 --format={{.State.Running}}
	I1009 18:28:20.094791  335435 cli_runner.go:164] Run: docker container inspect stopped-upgrade-729726 --format={{.State.Status}}
	I1009 18:28:20.117486  335435 cli_runner.go:164] Run: docker exec stopped-upgrade-729726 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:28:20.171945  335435 oci.go:144] the created container "stopped-upgrade-729726" has a running status.
	I1009 18:28:20.171981  335435 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa...
	I1009 18:28:20.303522  335435 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:28:20.338238  335435 cli_runner.go:164] Run: docker container inspect stopped-upgrade-729726 --format={{.State.Status}}
	I1009 18:28:20.359086  335435 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:28:20.359126  335435 kic_runner.go:114] Args: [docker exec --privileged stopped-upgrade-729726 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:28:20.427504  335435 cli_runner.go:164] Run: docker container inspect stopped-upgrade-729726 --format={{.State.Status}}
	I1009 18:28:20.452422  335435 machine.go:88] provisioning docker machine ...
	I1009 18:28:20.452475  335435 ubuntu.go:169] provisioning hostname "stopped-upgrade-729726"
	I1009 18:28:20.453230  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:20.482791  335435 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:20.483407  335435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1009 18:28:20.483425  335435 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-729726 && echo "stopped-upgrade-729726" | sudo tee /etc/hostname
	I1009 18:28:20.627495  335435 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-729726
	
	I1009 18:28:20.627580  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:20.650100  335435 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:20.650631  335435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1009 18:28:20.650652  335435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-729726' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-729726/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-729726' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:20.772682  335435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:20.772707  335435 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21139-140450/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-140450/.minikube}
	I1009 18:28:20.772729  335435 ubuntu.go:177] setting up certificates
	I1009 18:28:20.772742  335435 provision.go:83] configureAuth start
	I1009 18:28:20.772805  335435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-729726
	I1009 18:28:20.791620  335435 provision.go:138] copyHostCerts
	I1009 18:28:20.791670  335435 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem, removing ...
	I1009 18:28:20.791676  335435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem
	I1009 18:28:20.791730  335435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem (1078 bytes)
	I1009 18:28:20.791808  335435 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem, removing ...
	I1009 18:28:20.791811  335435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem
	I1009 18:28:20.791834  335435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem (1123 bytes)
	I1009 18:28:20.791884  335435 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem, removing ...
	I1009 18:28:20.791887  335435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem
	I1009 18:28:20.791907  335435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem (1675 bytes)
	I1009 18:28:20.791948  335435 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-729726 san=[192.168.103.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-729726]
	I1009 18:28:20.863434  335435 provision.go:172] copyRemoteCerts
	I1009 18:28:20.863497  335435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:20.863532  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:20.883497  335435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa Username:docker}
	I1009 18:28:20.971098  335435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:28:20.995802  335435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 18:28:21.021297  335435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:28:21.046980  335435 provision.go:86] duration metric: configureAuth took 274.223291ms
	I1009 18:28:21.047006  335435 ubuntu.go:193] setting minikube options for container-runtime
	I1009 18:28:21.047226  335435 config.go:182] Loaded profile config "stopped-upgrade-729726": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1009 18:28:21.047248  335435 machine.go:91] provisioned docker machine in 594.797596ms
	I1009 18:28:21.047256  335435 client.go:171] LocalClient.Create took 10.90361302s
	I1009 18:28:21.047281  335435 start.go:167] duration metric: libmachine.API.Create for "stopped-upgrade-729726" took 10.903673823s
	I1009 18:28:21.047290  335435 start.go:300] post-start starting for "stopped-upgrade-729726" (driver="docker")
	I1009 18:28:21.047303  335435 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:21.047358  335435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:21.047393  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:21.067022  335435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa Username:docker}
	I1009 18:28:21.157789  335435 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:21.161163  335435 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:21.161198  335435 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 18:28:21.161211  335435 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 18:28:21.161218  335435 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1009 18:28:21.161230  335435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/addons for local assets ...
	I1009 18:28:21.161292  335435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/files for local assets ...
	I1009 18:28:21.161386  335435 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem -> 1440942.pem in /etc/ssl/certs
	I1009 18:28:21.161525  335435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:28:21.169756  335435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:21.197617  335435 start.go:303] post-start completed in 150.314441ms
	I1009 18:28:21.197997  335435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-729726
	I1009 18:28:21.218345  335435 profile.go:148] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/stopped-upgrade-729726/config.json ...
	I1009 18:28:21.218638  335435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:21.218680  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:21.239053  335435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa Username:docker}
	I1009 18:28:21.327875  335435 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:21.332485  335435 start.go:128] duration metric: createHost completed in 11.190379079s
	I1009 18:28:21.332501  335435 start.go:83] releasing machines lock for "stopped-upgrade-729726", held for 11.190529177s
	I1009 18:28:21.332574  335435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-729726
	I1009 18:28:21.354546  335435 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:21.354567  335435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:21.354593  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:21.354639  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:21.376375  335435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa Username:docker}
	I1009 18:28:21.376840  335435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa Username:docker}
	I1009 18:28:21.570253  335435 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:21.576022  335435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:28:21.580553  335435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1009 18:28:21.610907  335435 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1009 18:28:21.610980  335435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:21.648932  335435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1009 18:28:21.648951  335435 start.go:472] detecting cgroup driver to use...
	I1009 18:28:21.648980  335435 detect.go:199] detected "systemd" cgroup driver on host os
	I1009 18:28:21.649031  335435 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 18:28:21.664113  335435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 18:28:21.676614  335435 docker.go:203] disabling cri-docker service (if available) ...
	I1009 18:28:21.676665  335435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:21.692807  335435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:21.706960  335435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:21.797361  335435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:21.890247  335435 docker.go:219] disabling docker service ...
	I1009 18:28:21.890309  335435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:21.910938  335435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:21.923600  335435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:22.014142  335435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:22.112640  335435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:22.125462  335435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:22.142895  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 18:28:22.159060  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 18:28:22.169807  335435 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I1009 18:28:22.169865  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1009 18:28:22.180428  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:22.191179  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 18:28:22.201401  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:22.212648  335435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:22.222803  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 18:28:22.233651  335435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:22.243076  335435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:22.253816  335435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:22.016264  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:22.025798  341627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:22.034867  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 18:28:22.044087  341627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:22.055363  341627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:22.062843  341627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:22.152470  341627 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 18:28:22.253368  341627 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 18:28:22.253443  341627 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 18:28:22.257875  341627 start.go:563] Will wait 60s for crictl version
	I1009 18:28:22.257930  341627 ssh_runner.go:195] Run: which crictl
	I1009 18:28:22.262658  341627 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:28:22.295061  341627 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1009 18:28:22.295174  341627 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:22.322191  341627 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:22.349785  341627 out.go:179] * Preparing containerd 1.7.28 ...
	I1009 18:28:22.351108  341627 ssh_runner.go:195] Run: rm -f paused
	I1009 18:28:22.356194  341627 out.go:179] * Done! minikube is ready without Kubernetes!
	I1009 18:28:22.358761  341627 out.go:203] ╭──────────────────────────────────────────────────────────╮
	│                                                          │
	│          * Things to try without Kubernetes ...          │
	│                                                          │
	│    - "minikube ssh" to SSH into minikube's node.         │
	│    - "minikube image" to build images without docker.    │
	│                                                          │
	╰──────────────────────────────────────────────────────────╯
	I1009 18:28:20.058401  340368 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-701596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:20.079994  340368 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 18:28:20.084756  340368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:20.098572  340368 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-701596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-701596 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:28:20.098727  340368 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1009 18:28:20.098797  340368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:20.129790  340368 containerd.go:627] all images are preloaded for containerd runtime.
	I1009 18:28:20.129817  340368 containerd.go:534] Images already preloaded, skipping extraction
	I1009 18:28:20.129883  340368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:20.161987  340368 containerd.go:627] all images are preloaded for containerd runtime.
	I1009 18:28:20.162007  340368 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:28:20.162016  340368 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1009 18:28:20.162106  340368 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-701596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-701596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:28:20.162176  340368 ssh_runner.go:195] Run: sudo crictl info
	I1009 18:28:20.197595  340368 cni.go:84] Creating CNI manager for ""
	I1009 18:28:20.197625  340368 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:28:20.197717  340368 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:28:20.197782  340368 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-701596 NodeName:kubernetes-upgrade-701596 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:28:20.197994  340368 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-701596"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:28:20.198077  340368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1009 18:28:20.209070  340368 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:28:20.209159  340368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:28:20.218347  340368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1009 18:28:20.238709  340368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:28:20.254343  340368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I1009 18:28:20.270354  340368 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:28:20.274555  340368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:20.288353  340368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:20.411510  340368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:28:20.439054  340368 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596 for IP: 192.168.76.2
	I1009 18:28:20.439079  340368 certs.go:195] generating shared ca certs ...
	I1009 18:28:20.439102  340368 certs.go:227] acquiring lock for ca certs: {Name:mk886b151c2ee368fca29ea3aee2e1e334a9b55c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.439638  340368 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.key
	I1009 18:28:20.439777  340368 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.key
	I1009 18:28:20.439824  340368 certs.go:257] generating profile certs ...
	I1009 18:28:20.439940  340368 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.key
	I1009 18:28:20.440011  340368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.crt with IP's: []
	I1009 18:28:20.918609  340368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.crt ...
	I1009 18:28:20.918646  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.crt: {Name:mke63eff4621790fb9613d101045ddd5ef8b433f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.918841  340368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.key ...
	I1009 18:28:20.918865  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.key: {Name:mkc0b317bfe82bf79a00685c75590df3337845d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.918988  340368 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key.59c826b3
	I1009 18:28:20.919011  340368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt.59c826b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 18:28:21.187031  340368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt.59c826b3 ...
	I1009 18:28:21.187061  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt.59c826b3: {Name:mke7e24ad9abf21bcd1fd5a13807745c3519a23e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:21.187273  340368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key.59c826b3 ...
	I1009 18:28:21.187302  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key.59c826b3: {Name:mk47f7d9887d78713120b9f3236f0f5f1523adc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:21.187433  340368 certs.go:382] copying /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt.59c826b3 -> /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt
	I1009 18:28:21.187557  340368 certs.go:386] copying /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key.59c826b3 -> /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key
	I1009 18:28:21.188307  340368 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.key
	I1009 18:28:21.188336  340368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.crt with IP's: []
	I1009 18:28:21.603655  340368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.crt ...
	I1009 18:28:21.603681  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.crt: {Name:mk91c0b64c87af88f04bf404fd81f8baa12d700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:21.603850  340368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.key ...
	I1009 18:28:21.603873  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.key: {Name:mk9549ae414f5a458797b1ddd3d4310db3c43aef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:21.604103  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094.pem (1338 bytes)
	W1009 18:28:21.604177  340368 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094_empty.pem, impossibly tiny 0 bytes
	I1009 18:28:21.604193  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:28:21.604283  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:28:21.604327  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:28:21.604360  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem (1675 bytes)
	I1009 18:28:21.604413  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:21.605176  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:28:21.622646  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:28:21.650427  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:28:21.670709  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:28:21.689457  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 18:28:21.707517  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 18:28:21.727603  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:28:21.754253  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:28:21.774032  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094.pem --> /usr/share/ca-certificates/144094.pem (1338 bytes)
	I1009 18:28:21.798273  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /usr/share/ca-certificates/1440942.pem (1708 bytes)
	I1009 18:28:21.816038  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:28:21.839017  340368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:28:21.851618  340368 ssh_runner.go:195] Run: openssl version
	I1009 18:28:21.858281  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144094.pem && ln -fs /usr/share/ca-certificates/144094.pem /etc/ssl/certs/144094.pem"
	I1009 18:28:21.868896  340368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.878621  340368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:03 /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.878700  340368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.915844  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144094.pem /etc/ssl/certs/51391683.0"
	I1009 18:28:21.925107  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1440942.pem && ln -fs /usr/share/ca-certificates/1440942.pem /etc/ssl/certs/1440942.pem"
	I1009 18:28:21.933728  340368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.937809  340368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:03 /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.937863  340368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.982716  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1440942.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:28:21.992499  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:28:22.002919  340368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:22.007490  340368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:22.007541  340368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:22.062433  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:28:22.070875  340368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:28:22.074358  340368 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:28:22.074423  340368 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-701596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-701596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:22.074505  340368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1009 18:28:22.074550  340368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:22.110154  340368 cri.go:89] found id: ""
	I1009 18:28:22.110231  340368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:28:22.118813  340368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:28:22.126745  340368 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:28:22.126802  340368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:28:22.134400  340368 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:28:22.134422  340368 kubeadm.go:157] found existing configuration files:
	
	I1009 18:28:22.134463  340368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:28:22.141808  340368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:28:22.141858  340368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:28:22.148897  340368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:28:22.156863  340368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:28:22.156916  340368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:28:22.164762  340368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:28:22.173014  340368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:28:22.173075  340368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:28:22.180346  340368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:28:22.187939  340368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:28:22.187997  340368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:28:22.195683  340368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:28:22.244219  340368 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1009 18:28:22.244299  340368 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:28:22.287063  340368 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:28:22.287194  340368 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:28:22.287233  340368 kubeadm.go:318] OS: Linux
	I1009 18:28:22.287268  340368 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:28:22.287321  340368 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:28:22.287384  340368 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:28:22.287449  340368 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:28:22.287556  340368 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:28:22.287645  340368 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:28:22.287710  340368 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:28:22.287799  340368 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:28:22.376864  340368 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:28:22.377002  340368 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:28:22.377258  340368 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 18:28:22.547559  340368 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:28:22.324919  335435 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 18:28:22.447664  335435 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 18:28:22.447746  335435 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 18:28:22.452401  335435 start.go:540] Will wait 60s for crictl version
	I1009 18:28:22.452459  335435 ssh_runner.go:195] Run: which crictl
	I1009 18:28:22.457435  335435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:28:22.506985  335435 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1009 18:28:22.507084  335435 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:22.536243  335435 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:22.567259  335435 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.6.24 ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247167716Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247231782Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247253194Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247265141Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247281744Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247294672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247310309Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247323429Z" level=info msg="NRI interface is disabled by configuration."
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247337048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247679655Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.9 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247746746Z" level=info msg="Connect containerd service"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247795876Z" level=info msg="using legacy CRI server"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247807961Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247953040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.248732612Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.248907763Z" level=info msg="Start subscribing containerd event"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.249253109Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.249475409Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.249583087Z" level=info msg="Start recovering state"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.250277259Z" level=info msg="Start event monitor"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.250378167Z" level=info msg="Start snapshots syncer"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.250406899Z" level=info msg="Start cni network conf syncer for default"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.250428939Z" level=info msg="Start streaming server"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.250583911Z" level=info msg="containerd successfully booted in 0.041875s"
	Oct 09 18:28:22 NoKubernetes-847951 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	
	
	==> dmesg <==
	[Oct 9 17:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001883] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.081021] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.375327] i8042: Warning: Keylock active
	[  +0.011676] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003214] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000906] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000935] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001129] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000664] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000730] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000835] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.448086] block sda: the capability attribute has been deprecated.
	[  +0.076799] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.019944] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.638606] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:28:23 up  1:10,  0 user,  load average: 6.82, 2.65, 10.81
	Linux NoKubernetes-847951 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	-- No entries --
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p NoKubernetes-847951 -n NoKubernetes-847951
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p NoKubernetes-847951 -n NoKubernetes-847951: exit status 6 (337.68658ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:28:23.739753  347914 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-847951" does not appear in /home/jenkins/minikube-integration/21139-140450/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "NoKubernetes-847951" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect NoKubernetes-847951
helpers_test.go:243: (dbg) docker inspect NoKubernetes-847951:

-- stdout --
	[
	    {
	        "Id": "82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0",
	        "Created": "2025-10-09T18:28:17.65935633Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 343875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:28:18.171337709Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0/hosts",
	        "LogPath": "/var/lib/docker/containers/82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0/82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0-json.log",
	        "Name": "/NoKubernetes-847951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "NoKubernetes-847951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "NoKubernetes-847951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82b5b957bef9b0708792a124c7c9d08e8c1230220c43a406c8e1f0084b34f9d0",
	                "LowerDir": "/var/lib/docker/overlay2/ddb12026e9d292bd26e86ab0c9a1b530ad5c970a0ef39f3cd628266b2f8241f6-init/diff:/var/lib/docker/overlay2/2a598a362d6b1138dfd456c417c26d95545a2673435fc2114840f46031e2745b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddb12026e9d292bd26e86ab0c9a1b530ad5c970a0ef39f3cd628266b2f8241f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddb12026e9d292bd26e86ab0c9a1b530ad5c970a0ef39f3cd628266b2f8241f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddb12026e9d292bd26e86ab0c9a1b530ad5c970a0ef39f3cd628266b2f8241f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "NoKubernetes-847951",
	                "Source": "/var/lib/docker/volumes/NoKubernetes-847951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "NoKubernetes-847951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "NoKubernetes-847951",
	                "name.minikube.sigs.k8s.io": "NoKubernetes-847951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "33870d0494c79d469c1251407f7661d1aff6a44e03f0a2348fa81b7bfd8b6fb1",
	            "SandboxKey": "/var/run/docker/netns/33870d0494c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33003"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33004"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33007"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33005"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33006"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "NoKubernetes-847951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:7a:1c:d4:a3:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1b75b78bdfe92b60c75667900dc5720a91a1e6067b3792d7ee6eb086c6c84bc",
	                    "EndpointID": "b4afb3b50e57da612f1c0ed851978f88071ab6d0e65d8fcf32c0367b951ae3eb",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "NoKubernetes-847951",
	                        "82b5b957bef9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-847951 -n NoKubernetes-847951
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-847951 -n NoKubernetes-847951: exit status 6 (327.079503ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:28:24.089747  348108 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-847951" does not appear in /home/jenkins/minikube-integration/21139-140450/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-847951 logs -n 25
helpers_test.go:260: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-265552 sudo docker system info                                                                                                       │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl status cri-docker --all --full --no-pager                                                                      │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl cat cri-docker --no-pager                                                                                      │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                 │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                           │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo cri-dockerd --version                                                                                                    │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl status containerd --all --full --no-pager                                                                      │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl cat containerd --no-pager                                                                                      │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo cat /lib/systemd/system/containerd.service                                                                               │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo cat /etc/containerd/config.toml                                                                                          │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo containerd config dump                                                                                                   │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl status crio --all --full --no-pager                                                                            │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo systemctl cat crio --no-pager                                                                                            │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                  │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ -p cilium-265552 sudo crio config                                                                                                              │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ delete  │ -p cilium-265552                                                                                                                               │ cilium-265552             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:27 UTC │
	│ start   │ -p stopped-upgrade-729726 --memory=3072 --vm-driver=docker  --container-runtime=containerd                                                     │ stopped-upgrade-729726    │ jenkins │ v1.32.0 │ 09 Oct 25 18:27 UTC │                     │
	│ ssh     │ force-systemd-env-855890 ssh cat /etc/containerd/config.toml                                                                                   │ force-systemd-env-855890  │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:27 UTC │
	│ delete  │ -p force-systemd-env-855890                                                                                                                    │ force-systemd-env-855890  │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:27 UTC │
	│ start   │ -p NoKubernetes-847951 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                    │ NoKubernetes-847951       │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:28 UTC │
	│ start   │ -p missing-upgrade-552528 --memory=3072 --driver=docker  --container-runtime=containerd                                                        │ missing-upgrade-552528    │ jenkins │ v1.32.0 │ 09 Oct 25 18:27 UTC │                     │
	│ delete  │ -p offline-containerd-818450                                                                                                                   │ offline-containerd-818450 │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:28 UTC │
	│ start   │ -p kubernetes-upgrade-701596 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd │ kubernetes-upgrade-701596 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ delete  │ -p NoKubernetes-847951                                                                                                                         │ NoKubernetes-847951       │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ start   │ -p NoKubernetes-847951 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                    │ NoKubernetes-847951       │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:28:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:28:12.007319  341627 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:28:12.007625  341627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:12.007635  341627 out.go:374] Setting ErrFile to fd 2...
	I1009 18:28:12.007640  341627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:12.007913  341627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	I1009 18:28:12.008605  341627 out.go:368] Setting JSON to false
	I1009 18:28:12.009908  341627 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4232,"bootTime":1760030260,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:28:12.010018  341627 start.go:141] virtualization: kvm guest
	I1009 18:28:12.059871  341627 out.go:179] * [NoKubernetes-847951] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:28:12.061144  341627 notify.go:220] Checking for updates...
	I1009 18:28:12.061167  341627 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:28:12.062567  341627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:28:12.064662  341627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	I1009 18:28:12.066106  341627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	I1009 18:28:12.070400  341627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:28:12.071967  341627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:28:12.073926  341627 config.go:182] Loaded profile config "kubernetes-upgrade-701596": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1009 18:28:12.074073  341627 config.go:182] Loaded profile config "missing-upgrade-552528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1009 18:28:12.074212  341627 config.go:182] Loaded profile config "stopped-upgrade-729726": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1009 18:28:12.074247  341627 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 18:28:12.074352  341627 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:28:12.101670  341627 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:28:12.101790  341627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:12.176010  341627 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:99 SystemTime:2025-10-09 18:28:12.163396808 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:12.176211  341627 docker.go:318] overlay module found
	I1009 18:28:12.178500  341627 out.go:179] * Using the docker driver based on user configuration
	I1009 18:28:12.179539  341627 start.go:305] selected driver: docker
	I1009 18:28:12.179560  341627 start.go:925] validating driver "docker" against <nil>
	I1009 18:28:12.179575  341627 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:28:12.180357  341627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:12.262551  341627 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-09 18:28:12.250822574 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:12.262694  341627 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 18:28:12.262790  341627 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:28:12.263108  341627 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:28:12.264875  341627 out.go:179] * Using Docker driver with root privileges
	I1009 18:28:12.265922  341627 cni.go:84] Creating CNI manager for ""
	I1009 18:28:12.266018  341627 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:28:12.266033  341627 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:28:12.266066  341627 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 18:28:12.266137  341627 start.go:349] cluster config:
	{Name:NoKubernetes-847951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-847951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:12.267282  341627 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-847951
	I1009 18:28:12.268317  341627 cache.go:133] Beginning downloading kic base image for docker with containerd
	I1009 18:28:12.269391  341627 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:28:12.270449  341627 cache.go:58] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1009 18:28:12.270546  341627 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:28:12.270691  341627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/NoKubernetes-847951/config.json ...
	I1009 18:28:12.270729  341627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/NoKubernetes-847951/config.json: {Name:mk8acae52a86147cd8ec6a24f9ad8611a87d36b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:12.294609  341627 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:28:12.294643  341627 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:28:12.294665  341627 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:28:12.294698  341627 start.go:360] acquireMachinesLock for NoKubernetes-847951: {Name:mkf32ae34eb47bcc7ba08a99cd03ce047ae6cf03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:28:12.294764  341627 start.go:364] duration metric: took 42.969µs to acquireMachinesLock for "NoKubernetes-847951"
	I1009 18:28:12.294789  341627 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-847951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-847951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1009 18:28:12.294891  341627 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:28:10.143298  335435 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:28:10.143610  335435 start.go:159] libmachine.API.Create for "stopped-upgrade-729726" (driver="docker")
	I1009 18:28:10.143638  335435 client.go:168] LocalClient.Create starting
	I1009 18:28:10.143710  335435 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem
	I1009 18:28:10.143747  335435 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:10.143771  335435 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:10.143847  335435 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem
	I1009 18:28:10.143868  335435 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:10.143878  335435 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:10.144381  335435 cli_runner.go:164] Run: docker network inspect stopped-upgrade-729726 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:28:10.166341  335435 cli_runner.go:211] docker network inspect stopped-upgrade-729726 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:28:10.166411  335435 network_create.go:281] running [docker network inspect stopped-upgrade-729726] to gather additional debugging logs...
	I1009 18:28:10.166427  335435 cli_runner.go:164] Run: docker network inspect stopped-upgrade-729726
	W1009 18:28:10.186602  335435 cli_runner.go:211] docker network inspect stopped-upgrade-729726 returned with exit code 1
	I1009 18:28:10.186629  335435 network_create.go:284] error running [docker network inspect stopped-upgrade-729726]: docker network inspect stopped-upgrade-729726: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network stopped-upgrade-729726 not found
	I1009 18:28:10.186646  335435 network_create.go:286] output of [docker network inspect stopped-upgrade-729726]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network stopped-upgrade-729726 not found
	
	** /stderr **
	I1009 18:28:10.186731  335435 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:10.206089  335435 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a776d4a7d86a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:a7:10:79:cc:07} reservation:<nil>}
	I1009 18:28:10.206981  335435 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-98ca10e9ecda IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:3b:88:20:02:72} reservation:<nil>}
	I1009 18:28:10.207801  335435 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2287629eec3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:e5:92:f7:19:89} reservation:<nil>}
	I1009 18:28:10.208723  335435 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd1a93c0c2b4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:e7:b5:31:47:49} reservation:<nil>}
	I1009 18:28:10.209420  335435 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-59758b7aed05 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:be:a1:3f:01:8e:55} reservation:<nil>}
	I1009 18:28:10.210324  335435 network.go:214] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-8f4ead0b5675 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:86:ee:81:3a:32} reservation:<nil>}
	I1009 18:28:10.211555  335435 network.go:209] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027c9220}
	I1009 18:28:10.211577  335435 network_create.go:124] attempt to create docker network stopped-upgrade-729726 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1009 18:28:10.211641  335435 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=stopped-upgrade-729726 stopped-upgrade-729726
	I1009 18:28:10.694736  335435 network_create.go:108] docker network stopped-upgrade-729726 192.168.103.0/24 created
	I1009 18:28:10.694768  335435 kic.go:121] calculated static IP "192.168.103.2" for the "stopped-upgrade-729726" container
	I1009 18:28:10.694847  335435 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:28:10.716248  335435 cli_runner.go:164] Run: docker volume create stopped-upgrade-729726 --label name.minikube.sigs.k8s.io=stopped-upgrade-729726 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:28:10.784219  335435 oci.go:103] Successfully created a docker volume stopped-upgrade-729726
	I1009 18:28:10.784318  335435 cli_runner.go:164] Run: docker run --rm --name stopped-upgrade-729726-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-729726 --entrypoint /usr/bin/test -v stopped-upgrade-729726:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1009 18:28:09.319166  340368 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:28:09.319470  340368 start.go:159] libmachine.API.Create for "kubernetes-upgrade-701596" (driver="docker")
	I1009 18:28:09.319505  340368 client.go:168] LocalClient.Create starting
	I1009 18:28:09.319583  340368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem
	I1009 18:28:09.319616  340368 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:09.319638  340368 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:09.319714  340368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem
	I1009 18:28:09.319755  340368 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:09.319780  340368 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:09.320263  340368 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-701596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:28:09.337936  340368 cli_runner.go:211] docker network inspect kubernetes-upgrade-701596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:28:09.338003  340368 network_create.go:284] running [docker network inspect kubernetes-upgrade-701596] to gather additional debugging logs...
	I1009 18:28:09.338024  340368 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-701596
	W1009 18:28:09.354735  340368 cli_runner.go:211] docker network inspect kubernetes-upgrade-701596 returned with exit code 1
	I1009 18:28:09.354769  340368 network_create.go:287] error running [docker network inspect kubernetes-upgrade-701596]: docker network inspect kubernetes-upgrade-701596: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-701596 not found
	I1009 18:28:09.354790  340368 network_create.go:289] output of [docker network inspect kubernetes-upgrade-701596]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-701596 not found
	
	** /stderr **
	I1009 18:28:09.354880  340368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:09.371446  340368 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a776d4a7d86a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:a7:10:79:cc:07} reservation:<nil>}
	I1009 18:28:09.371780  340368 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-98ca10e9ecda IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:3b:88:20:02:72} reservation:<nil>}
	I1009 18:28:09.372102  340368 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2287629eec3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:e5:92:f7:19:89} reservation:<nil>}
	I1009 18:28:09.372592  340368 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d2eb80}
	I1009 18:28:09.372620  340368 network_create.go:124] attempt to create docker network kubernetes-upgrade-701596 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 18:28:09.372663  340368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-701596 kubernetes-upgrade-701596
	I1009 18:28:09.428682  340368 network_create.go:108] docker network kubernetes-upgrade-701596 192.168.76.0/24 created
	I1009 18:28:09.428715  340368 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-701596" container
	I1009 18:28:09.428798  340368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:28:09.445416  340368 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-701596 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-701596 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:28:09.462547  340368 oci.go:103] Successfully created a docker volume kubernetes-upgrade-701596
	I1009 18:28:09.462619  340368 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-701596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-701596 --entrypoint /usr/bin/test -v kubernetes-upgrade-701596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:28:09.899857  340368 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-701596
	I1009 18:28:09.899931  340368 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1009 18:28:09.899942  340368 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:28:09.900002  340368 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-701596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:28:09.948913  339253 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:28:09.949211  339253 start.go:159] libmachine.API.Create for "missing-upgrade-552528" (driver="docker")
	I1009 18:28:09.949242  339253 client.go:168] LocalClient.Create starting
	I1009 18:28:09.949308  339253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem
	I1009 18:28:09.949346  339253 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:09.949364  339253 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:09.949425  339253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem
	I1009 18:28:09.949448  339253 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:09.949459  339253 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:09.949856  339253 cli_runner.go:164] Run: docker network inspect missing-upgrade-552528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:28:09.969391  339253 cli_runner.go:211] docker network inspect missing-upgrade-552528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:28:09.969453  339253 network_create.go:281] running [docker network inspect missing-upgrade-552528] to gather additional debugging logs...
	I1009 18:28:09.969472  339253 cli_runner.go:164] Run: docker network inspect missing-upgrade-552528
	W1009 18:28:09.990468  339253 cli_runner.go:211] docker network inspect missing-upgrade-552528 returned with exit code 1
	I1009 18:28:09.990495  339253 network_create.go:284] error running [docker network inspect missing-upgrade-552528]: docker network inspect missing-upgrade-552528: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-552528 not found
	I1009 18:28:09.990511  339253 network_create.go:286] output of [docker network inspect missing-upgrade-552528]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-552528 not found
	
	** /stderr **
	I1009 18:28:09.990640  339253 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:10.010592  339253 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a776d4a7d86a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:a7:10:79:cc:07} reservation:<nil>}
	I1009 18:28:10.012318  339253 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-98ca10e9ecda IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:3b:88:20:02:72} reservation:<nil>}
	I1009 18:28:10.013513  339253 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2287629eec3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:e5:92:f7:19:89} reservation:<nil>}
	I1009 18:28:10.014369  339253 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd1a93c0c2b4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:e7:b5:31:47:49} reservation:<nil>}
	I1009 18:28:10.015263  339253 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024b27b0}
	I1009 18:28:10.015287  339253 network_create.go:124] attempt to create docker network missing-upgrade-552528 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 18:28:10.015345  339253 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-552528 missing-upgrade-552528
	I1009 18:28:10.086849  339253 network_create.go:108] docker network missing-upgrade-552528 192.168.85.0/24 created
	I1009 18:28:10.086880  339253 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-552528" container
	I1009 18:28:10.086961  339253 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:28:10.108801  339253 cli_runner.go:164] Run: docker volume create missing-upgrade-552528 --label name.minikube.sigs.k8s.io=missing-upgrade-552528 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:28:10.138143  339253 oci.go:103] Successfully created a docker volume missing-upgrade-552528
	I1009 18:28:10.138220  339253 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-552528-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-552528 --entrypoint /usr/bin/test -v missing-upgrade-552528:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1009 18:28:11.853928  339253 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-552528-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-552528 --entrypoint /usr/bin/test -v missing-upgrade-552528:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.715664139s)
	I1009 18:28:11.853952  339253 oci.go:107] Successfully prepared a docker volume missing-upgrade-552528
	I1009 18:28:11.853979  339253 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1009 18:28:11.854004  339253 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:28:11.854084  339253 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-552528:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:28:12.298230  341627 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:28:12.298499  341627 start.go:159] libmachine.API.Create for "NoKubernetes-847951" (driver="docker")
	I1009 18:28:12.298541  341627 client.go:168] LocalClient.Create starting
	I1009 18:28:12.298650  341627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem
	I1009 18:28:12.298705  341627 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:12.298729  341627 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:12.298809  341627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem
	I1009 18:28:12.298848  341627 main.go:141] libmachine: Decoding PEM data...
	I1009 18:28:12.298887  341627 main.go:141] libmachine: Parsing certificate...
	I1009 18:28:12.299343  341627 cli_runner.go:164] Run: docker network inspect NoKubernetes-847951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:28:12.321799  341627 cli_runner.go:211] docker network inspect NoKubernetes-847951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:28:12.321914  341627 network_create.go:284] running [docker network inspect NoKubernetes-847951] to gather additional debugging logs...
	I1009 18:28:12.321953  341627 cli_runner.go:164] Run: docker network inspect NoKubernetes-847951
	W1009 18:28:12.345187  341627 cli_runner.go:211] docker network inspect NoKubernetes-847951 returned with exit code 1
	I1009 18:28:12.345245  341627 network_create.go:287] error running [docker network inspect NoKubernetes-847951]: docker network inspect NoKubernetes-847951: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-847951 not found
	I1009 18:28:12.345267  341627 network_create.go:289] output of [docker network inspect NoKubernetes-847951]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-847951 not found
	
	** /stderr **
	I1009 18:28:12.345423  341627 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:12.368564  341627 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a776d4a7d86a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:a7:10:79:cc:07} reservation:<nil>}
	I1009 18:28:12.369169  341627 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-98ca10e9ecda IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:3b:88:20:02:72} reservation:<nil>}
	I1009 18:28:12.369762  341627 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2287629eec3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:e5:92:f7:19:89} reservation:<nil>}
	I1009 18:28:12.370399  341627 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fd1a93c0c2b4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:e7:b5:31:47:49} reservation:<nil>}
	I1009 18:28:12.370768  341627 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-59758b7aed05 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:be:a1:3f:01:8e:55} reservation:<nil>}
	I1009 18:28:12.371459  341627 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e080e0}
	I1009 18:28:12.371493  341627 network_create.go:124] attempt to create docker network NoKubernetes-847951 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1009 18:28:12.371548  341627 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-847951 NoKubernetes-847951
	I1009 18:28:12.814873  341627 network_create.go:108] docker network NoKubernetes-847951 192.168.94.0/24 created
	I1009 18:28:12.814915  341627 kic.go:121] calculated static IP "192.168.94.2" for the "NoKubernetes-847951" container
	I1009 18:28:12.814988  341627 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:28:12.837062  341627 cli_runner.go:164] Run: docker volume create NoKubernetes-847951 --label name.minikube.sigs.k8s.io=NoKubernetes-847951 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:28:13.064284  341627 oci.go:103] Successfully created a docker volume NoKubernetes-847951
	I1009 18:28:13.064423  341627 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-847951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-847951 --entrypoint /usr/bin/test -v NoKubernetes-847951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:28:13.153579  335435 cli_runner.go:217] Completed: docker run --rm --name stopped-upgrade-729726-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-729726 --entrypoint /usr/bin/test -v stopped-upgrade-729726:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (2.369215652s)
	I1009 18:28:13.153603  335435 oci.go:107] Successfully prepared a docker volume stopped-upgrade-729726
	I1009 18:28:13.153620  335435 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1009 18:28:13.153641  335435 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:28:13.153701  335435 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-729726:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:28:15.825157  340368 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-701596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.925079745s)
	I1009 18:28:15.825210  340368 kic.go:203] duration metric: took 5.925262454s to extract preloaded images to volume ...
	W1009 18:28:15.825320  340368 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:28:15.825363  340368 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:28:15.825412  340368 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:28:15.918621  340368 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-701596 --name kubernetes-upgrade-701596 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-701596 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-701596 --network kubernetes-upgrade-701596 --ip 192.168.76.2 --volume kubernetes-upgrade-701596:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:28:16.368638  340368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-701596 --format={{.State.Running}}
	I1009 18:28:16.388560  340368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-701596 --format={{.State.Status}}
	I1009 18:28:16.413494  340368 cli_runner.go:164] Run: docker exec kubernetes-upgrade-701596 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:28:16.492158  340368 oci.go:144] the created container "kubernetes-upgrade-701596" has a running status.
	I1009 18:28:16.492202  340368 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa...
	I1009 18:28:16.846798  340368 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:28:16.993230  340368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-701596 --format={{.State.Status}}
	I1009 18:28:17.022782  340368 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:28:17.022814  340368 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-701596 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:28:17.115017  340368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-701596 --format={{.State.Status}}
	I1009 18:28:17.135631  340368 machine.go:93] provisionDockerMachine start ...
	I1009 18:28:17.135743  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:17.159004  340368 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:17.171868  340368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1009 18:28:17.171903  340368 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:28:17.319000  340368 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-701596
	
	I1009 18:28:17.319033  340368 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-701596"
	I1009 18:28:17.319088  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:17.336415  340368 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:17.336747  340368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1009 18:28:17.336774  340368 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-701596 && echo "kubernetes-upgrade-701596" | sudo tee /etc/hostname
	I1009 18:28:17.497508  340368 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-701596
	
	I1009 18:28:17.497620  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:17.516135  340368 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:17.516467  340368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1009 18:28:17.516516  340368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-701596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-701596/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-701596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:17.672832  340368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:17.672892  340368 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-140450/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-140450/.minikube}
	I1009 18:28:17.672930  340368 ubuntu.go:190] setting up certificates
	I1009 18:28:17.672945  340368 provision.go:84] configureAuth start
	I1009 18:28:17.673019  340368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-701596
	I1009 18:28:17.690984  340368 provision.go:143] copyHostCerts
	I1009 18:28:17.691042  340368 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem, removing ...
	I1009 18:28:17.691053  340368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem
	I1009 18:28:17.692170  340368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem (1675 bytes)
	I1009 18:28:17.692289  340368 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem, removing ...
	I1009 18:28:17.692300  340368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem
	I1009 18:28:17.692333  340368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem (1078 bytes)
	I1009 18:28:17.692415  340368 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem, removing ...
	I1009 18:28:17.692424  340368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem
	I1009 18:28:17.692455  340368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem (1123 bytes)
	I1009 18:28:17.692558  340368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-701596 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-701596 localhost minikube]
	I1009 18:28:17.852316  340368 provision.go:177] copyRemoteCerts
	I1009 18:28:17.852392  340368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:17.852429  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:17.872228  340368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa Username:docker}
	I1009 18:28:17.975081  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:28:18.096505  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 18:28:18.124391  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:28:18.149407  340368 provision.go:87] duration metric: took 476.437203ms to configureAuth
	I1009 18:28:18.149456  340368 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:28:18.149653  340368 config.go:182] Loaded profile config "kubernetes-upgrade-701596": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1009 18:28:18.149668  340368 machine.go:96] duration metric: took 1.014011919s to provisionDockerMachine
	I1009 18:28:18.149677  340368 client.go:171] duration metric: took 8.830161985s to LocalClient.Create
	I1009 18:28:18.149701  340368 start.go:167] duration metric: took 8.830233842s to libmachine.API.Create "kubernetes-upgrade-701596"
	I1009 18:28:18.149711  340368 start.go:293] postStartSetup for "kubernetes-upgrade-701596" (driver="docker")
	I1009 18:28:18.149723  340368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:18.149783  340368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:18.149829  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:18.184179  340368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa Username:docker}
	I1009 18:28:18.310490  340368 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:18.315729  340368 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:18.315769  340368 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:28:18.315785  340368 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/addons for local assets ...
	I1009 18:28:18.315848  340368 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/files for local assets ...
	I1009 18:28:18.315952  340368 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem -> 1440942.pem in /etc/ssl/certs
	I1009 18:28:18.316089  340368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:28:18.333291  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:18.367762  340368 start.go:296] duration metric: took 218.032903ms for postStartSetup
	I1009 18:28:18.376358  340368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-701596
	I1009 18:28:18.396542  340368 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/config.json ...
	I1009 18:28:18.424372  340368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:18.424431  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:18.447483  340368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa Username:docker}
	I1009 18:28:18.553160  340368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:18.558274  340368 start.go:128] duration metric: took 9.240574027s to createHost
	I1009 18:28:18.558297  340368 start.go:83] releasing machines lock for "kubernetes-upgrade-701596", held for 9.240709036s
	I1009 18:28:18.558374  340368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-701596
	I1009 18:28:18.576922  340368 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:18.576975  340368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:18.576987  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:18.577034  340368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-701596
	I1009 18:28:18.597580  340368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa Username:docker}
	I1009 18:28:18.598975  340368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/kubernetes-upgrade-701596/id_rsa Username:docker}
	I1009 18:28:18.755610  340368 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:18.762180  340368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:28:18.786991  340368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:28:18.787073  340368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:18.828909  340368 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:28:18.828938  340368 start.go:495] detecting cgroup driver to use...
	I1009 18:28:18.828974  340368 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:28:18.829023  340368 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 18:28:18.851221  340368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 18:28:18.867806  340368 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:28:18.867867  340368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:18.893768  340368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:18.919847  340368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:19.050477  340368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:15.826500  339253 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-552528:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.972347642s)
	I1009 18:28:15.826549  339253 kic.go:203] duration metric: took 3.972541 seconds to extract preloaded images to volume
	W1009 18:28:15.826663  339253 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:28:15.826710  339253 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:28:15.826757  339253 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:28:15.907780  339253 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-552528 --name missing-upgrade-552528 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-552528 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-552528 --network missing-upgrade-552528 --ip 192.168.85.2 --volume missing-upgrade-552528:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1009 18:28:16.261334  339253 cli_runner.go:164] Run: docker container inspect missing-upgrade-552528 --format={{.State.Running}}
	I1009 18:28:16.281588  339253 cli_runner.go:164] Run: docker container inspect missing-upgrade-552528 --format={{.State.Status}}
	I1009 18:28:16.301053  339253 cli_runner.go:164] Run: docker exec missing-upgrade-552528 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:28:16.345491  339253 oci.go:144] the created container "missing-upgrade-552528" has a running status.
	I1009 18:28:16.345534  339253 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa...
	I1009 18:28:16.757480  339253 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:28:16.920931  339253 cli_runner.go:164] Run: docker container inspect missing-upgrade-552528 --format={{.State.Status}}
	I1009 18:28:16.941167  339253 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:28:16.941185  339253 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-552528 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:28:17.115036  339253 cli_runner.go:164] Run: docker container inspect missing-upgrade-552528 --format={{.State.Status}}
	I1009 18:28:17.136135  339253 machine.go:88] provisioning docker machine ...
	I1009 18:28:17.136176  339253 ubuntu.go:169] provisioning hostname "missing-upgrade-552528"
	I1009 18:28:17.136243  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:17.157035  339253 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:17.157628  339253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1009 18:28:17.157647  339253 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-552528 && echo "missing-upgrade-552528" | sudo tee /etc/hostname
	I1009 18:28:17.344875  339253 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-552528
	
	I1009 18:28:17.344957  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:17.365340  339253 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:17.365708  339253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1009 18:28:17.365729  339253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-552528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-552528/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-552528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:17.482876  339253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:17.482899  339253 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21139-140450/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-140450/.minikube}
	I1009 18:28:17.482943  339253 ubuntu.go:177] setting up certificates
	I1009 18:28:17.482963  339253 provision.go:83] configureAuth start
	I1009 18:28:17.483027  339253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-552528
	I1009 18:28:17.504232  339253 provision.go:138] copyHostCerts
	I1009 18:28:17.504287  339253 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem, removing ...
	I1009 18:28:17.504293  339253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem
	I1009 18:28:17.507217  339253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem (1078 bytes)
	I1009 18:28:17.507365  339253 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem, removing ...
	I1009 18:28:17.507374  339253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem
	I1009 18:28:17.507423  339253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem (1123 bytes)
	I1009 18:28:17.507577  339253 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem, removing ...
	I1009 18:28:17.507584  339253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem
	I1009 18:28:17.507630  339253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem (1675 bytes)
	I1009 18:28:17.507709  339253 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-552528 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-552528]
	I1009 18:28:17.611411  339253 provision.go:172] copyRemoteCerts
	I1009 18:28:17.611469  339253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:17.611538  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:17.633641  339253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa Username:docker}
	I1009 18:28:17.724262  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:28:17.891979  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:28:17.942021  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 18:28:18.076680  339253 provision.go:86] duration metric: configureAuth took 593.701508ms
	I1009 18:28:18.076698  339253 ubuntu.go:193] setting minikube options for container-runtime
	I1009 18:28:18.076917  339253 config.go:182] Loaded profile config "missing-upgrade-552528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1009 18:28:18.076927  339253 machine.go:91] provisioned docker machine in 940.778013ms
	I1009 18:28:18.076941  339253 client.go:171] LocalClient.Create took 8.127687308s
	I1009 18:28:18.076963  339253 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-552528" took 8.127755607s
	I1009 18:28:18.076971  339253 start.go:300] post-start starting for "missing-upgrade-552528" (driver="docker")
	I1009 18:28:18.076983  339253 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:18.077037  339253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:18.077076  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:18.098690  339253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa Username:docker}
	I1009 18:28:18.200933  339253 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:18.208211  339253 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:18.208274  339253 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 18:28:18.208287  339253 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 18:28:18.208295  339253 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1009 18:28:18.208307  339253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/addons for local assets ...
	I1009 18:28:18.208359  339253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/files for local assets ...
	I1009 18:28:18.208452  339253 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem -> 1440942.pem in /etc/ssl/certs
	I1009 18:28:18.208573  339253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:28:18.221419  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:18.255930  339253 start.go:303] post-start completed in 178.941529ms
	I1009 18:28:18.256376  339253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-552528
	I1009 18:28:18.277959  339253 profile.go:148] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/config.json ...
	I1009 18:28:18.278231  339253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:18.278274  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:18.306233  339253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa Username:docker}
	I1009 18:28:18.399271  339253 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:18.404324  339253 start.go:128] duration metric: createHost completed in 8.457005474s
	I1009 18:28:18.404341  339253 start.go:83] releasing machines lock for "missing-upgrade-552528", held for 8.457174693s
	I1009 18:28:18.404414  339253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-552528
	I1009 18:28:18.423842  339253 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:18.423887  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:18.423933  339253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:18.424026  339253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-552528
	I1009 18:28:18.446958  339253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa Username:docker}
	I1009 18:28:18.447052  339253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/missing-upgrade-552528/id_rsa Username:docker}
	I1009 18:28:18.529188  339253 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:18.637230  339253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:28:18.641989  339253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1009 18:28:18.815377  339253 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1009 18:28:18.815448  339253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:18.853647  339253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1009 18:28:18.853684  339253 start.go:472] detecting cgroup driver to use...
	I1009 18:28:18.853718  339253 detect.go:199] detected "systemd" cgroup driver on host os
	I1009 18:28:18.853785  339253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 18:28:18.872870  339253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 18:28:18.887142  339253 docker.go:203] disabling cri-docker service (if available) ...
	I1009 18:28:18.887204  339253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:18.904521  339253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:18.921520  339253 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:19.023398  339253 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:19.133892  339253 docker.go:219] disabling docker service ...
	I1009 18:28:19.133948  339253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:19.169407  339253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:19.190259  339253 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:19.352185  339253 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:19.444244  339253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:19.457167  339253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:19.504204  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 18:28:19.520641  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 18:28:19.535387  339253 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I1009 18:28:19.535802  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1009 18:28:19.548763  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:19.565538  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 18:28:19.581582  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:19.595714  339253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:19.609902  339253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 18:28:19.626938  339253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:19.639900  339253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:19.651665  339253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:19.746786  339253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 18:28:19.874775  339253 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 18:28:19.874841  339253 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 18:28:19.879005  339253 start.go:540] Will wait 60s for crictl version
	I1009 18:28:19.879062  339253 ssh_runner.go:195] Run: which crictl
	I1009 18:28:19.883704  339253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:28:19.933908  339253 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1009 18:28:19.933968  339253 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:19.966282  339253 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:19.996662  339253 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.6.24 ...
	I1009 18:28:19.193624  340368 docker.go:234] disabling docker service ...
	I1009 18:28:19.193693  340368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:19.219795  340368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:19.232974  340368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:19.383716  340368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:19.500527  340368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:19.516170  340368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:19.535756  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 18:28:19.548903  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 18:28:19.563764  340368 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1009 18:28:19.563975  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1009 18:28:19.577922  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:19.590166  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 18:28:19.603844  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:19.623992  340368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:19.636756  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 18:28:19.650319  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1009 18:28:19.663214  340368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1009 18:28:19.674868  340368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:19.687790  340368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:19.698847  340368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:19.812050  340368 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 18:28:19.961538  340368 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 18:28:19.961615  340368 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 18:28:19.966070  340368 start.go:563] Will wait 60s for crictl version
	I1009 18:28:19.966165  340368 ssh_runner.go:195] Run: which crictl
	I1009 18:28:19.969830  340368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:28:19.999908  340368 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1009 18:28:19.999976  340368 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:20.027187  340368 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:20.057286  340368 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 1.7.28 ...
	I1009 18:28:19.997722  339253 cli_runner.go:164] Run: docker network inspect missing-upgrade-552528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:20.017269  339253 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 18:28:20.022206  339253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:20.037097  339253 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1009 18:28:20.037207  339253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:20.083340  339253 containerd.go:604] all images are preloaded for containerd runtime.
	I1009 18:28:20.083358  339253 containerd.go:518] Images already preloaded, skipping extraction
	I1009 18:28:20.083420  339253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:20.128842  339253 containerd.go:604] all images are preloaded for containerd runtime.
	I1009 18:28:20.128859  339253 cache_images.go:84] Images are preloaded, skipping loading
	I1009 18:28:20.128977  339253 ssh_runner.go:195] Run: sudo crictl info
	I1009 18:28:20.176831  339253 cni.go:84] Creating CNI manager for ""
	I1009 18:28:20.176847  339253 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:28:20.176869  339253 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1009 18:28:20.176893  339253 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-552528 NodeName:missing-upgrade-552528 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:28:20.177044  339253 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "missing-upgrade-552528"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:28:20.177115  339253 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=missing-upgrade-552528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-552528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1009 18:28:20.177205  339253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1009 18:28:20.188681  339253 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:28:20.188747  339253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:28:20.200700  339253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (394 bytes)
	I1009 18:28:20.224492  339253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:28:20.252570  339253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2110 bytes)
	I1009 18:28:20.271733  339253 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:28:20.275440  339253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:20.290905  339253 certs.go:56] Setting up /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528 for IP: 192.168.85.2
	I1009 18:28:20.290945  339253 certs.go:190] acquiring lock for shared ca certs: {Name:mk886b151c2ee368fca29ea3aee2e1e334a9b55c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.291111  339253 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.key
	I1009 18:28:20.291199  339253 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.key
	I1009 18:28:20.291264  339253 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.key
	I1009 18:28:20.291277  339253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.crt with IP's: []
	I1009 18:28:20.424937  339253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.crt ...
	I1009 18:28:20.424959  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.crt: {Name:mkac1afd46e37e3192bd9830ca72e83c71456d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.425737  339253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.key ...
	I1009 18:28:20.425757  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/client.key: {Name:mk2f1850851a68f2659a31764dea2e5186332a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.425885  339253 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key.43b9df8c
	I1009 18:28:20.425900  339253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1009 18:28:20.591557  339253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt.43b9df8c ...
	I1009 18:28:20.591573  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt.43b9df8c: {Name:mk1080cbb499e6ea09361c5ef9375416d8697855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.591732  339253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key.43b9df8c ...
	I1009 18:28:20.591747  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key.43b9df8c: {Name:mk0f49a3cebc2f447128cff50f901e314db15662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.591834  339253 certs.go:337] copying /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt
	I1009 18:28:20.591915  339253 certs.go:341] copying /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key
	I1009 18:28:20.591964  339253 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.key
	I1009 18:28:20.591985  339253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.crt with IP's: []
	I1009 18:28:20.681848  339253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.crt ...
	I1009 18:28:20.681867  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.crt: {Name:mkd6cc8ccbcd5ece84a6d7dac9a794c13b6bbbdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.682016  339253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.key ...
	I1009 18:28:20.682026  339253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.key: {Name:mk985d8b7652d49467a2dfb445023f26dd32f6a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.682235  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094.pem (1338 bytes)
	W1009 18:28:20.682268  339253 certs.go:433] ignoring /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094_empty.pem, impossibly tiny 0 bytes
	I1009 18:28:20.682277  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:28:20.682300  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:28:20.682318  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:28:20.682339  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem (1675 bytes)
	I1009 18:28:20.682375  339253 certs.go:437] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:20.683044  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1009 18:28:20.714357  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 18:28:20.740068  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:28:20.765538  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/missing-upgrade-552528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:28:20.792319  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:28:20.818472  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:28:20.844500  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:28:20.869960  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:28:20.898672  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094.pem --> /usr/share/ca-certificates/144094.pem (1338 bytes)
	I1009 18:28:20.924817  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /usr/share/ca-certificates/1440942.pem (1708 bytes)
	I1009 18:28:20.947899  339253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:28:20.971622  339253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:28:20.989435  339253 ssh_runner.go:195] Run: openssl version
	I1009 18:28:20.994828  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144094.pem && ln -fs /usr/share/ca-certificates/144094.pem /etc/ssl/certs/144094.pem"
	I1009 18:28:21.004352  339253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.008464  339253 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:03 /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.008529  339253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.016134  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144094.pem /etc/ssl/certs/51391683.0"
	I1009 18:28:21.025629  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1440942.pem && ln -fs /usr/share/ca-certificates/1440942.pem /etc/ssl/certs/1440942.pem"
	I1009 18:28:21.035088  339253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.038671  339253 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:03 /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.038712  339253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.045547  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1440942.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:28:21.056313  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:28:21.067199  339253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:21.071651  339253 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:21.071696  339253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:21.081360  339253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:28:21.091536  339253 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1009 18:28:21.095183  339253 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1009 18:28:21.095232  339253 kubeadm.go:404] StartCluster: {Name:missing-upgrade-552528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-552528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 18:28:21.095313  339253 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1009 18:28:21.095364  339253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:21.131311  339253 cri.go:89] found id: ""
	I1009 18:28:21.131382  339253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:28:21.140764  339253 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:28:21.149608  339253 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:28:21.149658  339253 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:28:21.158720  339253 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:28:21.158758  339253 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:28:21.211447  339253 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1009 18:28:21.211518  339253 kubeadm.go:322] [preflight] Running pre-flight checks
	I1009 18:28:21.264078  339253 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:28:21.264200  339253 kubeadm.go:322] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:28:21.264260  339253 kubeadm.go:322] OS: Linux
	I1009 18:28:21.264320  339253 kubeadm.go:322] CGROUPS_CPU: enabled
	I1009 18:28:21.264384  339253 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1009 18:28:21.264422  339253 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1009 18:28:21.264481  339253 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1009 18:28:21.264538  339253 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1009 18:28:21.264595  339253 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1009 18:28:21.264665  339253 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1009 18:28:21.264721  339253 kubeadm.go:322] CGROUPS_IO: enabled
	I1009 18:28:21.332326  339253 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:28:21.332465  339253 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:28:21.332603  339253 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 18:28:21.571408  339253 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:28:17.573013  341627 cli_runner.go:217] Completed: docker run --rm --name NoKubernetes-847951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-847951 --entrypoint /usr/bin/test -v NoKubernetes-847951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (4.508533478s)
	I1009 18:28:17.573050  341627 oci.go:107] Successfully prepared a docker volume NoKubernetes-847951
	I1009 18:28:17.573137  341627 preload.go:178] Skipping preload logic due to --no-kubernetes flag
	W1009 18:28:17.573233  341627 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:28:17.573289  341627 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:28:17.573343  341627 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:28:17.640826  341627 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-847951 --name NoKubernetes-847951 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-847951 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-847951 --network NoKubernetes-847951 --ip 192.168.94.2 --volume NoKubernetes-847951:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:28:18.660343  341627 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-847951 --name NoKubernetes-847951 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-847951 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-847951 --network NoKubernetes-847951 --ip 192.168.94.2 --volume NoKubernetes-847951:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92: (1.019447593s)
	I1009 18:28:18.660453  341627 cli_runner.go:164] Run: docker container inspect NoKubernetes-847951 --format={{.State.Running}}
	I1009 18:28:18.676818  341627 cli_runner.go:164] Run: docker container inspect NoKubernetes-847951 --format={{.State.Status}}
	I1009 18:28:18.693147  341627 cli_runner.go:164] Run: docker exec NoKubernetes-847951 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:28:18.807862  341627 oci.go:144] the created container "NoKubernetes-847951" has a running status.
	I1009 18:28:18.807895  341627 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa...
	I1009 18:28:19.422927  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:28:19.423066  341627 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:28:19.515239  341627 cli_runner.go:164] Run: docker container inspect NoKubernetes-847951 --format={{.State.Status}}
	I1009 18:28:19.536831  341627 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:28:19.536852  341627 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-847951 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:28:19.594887  341627 cli_runner.go:164] Run: docker container inspect NoKubernetes-847951 --format={{.State.Status}}
	I1009 18:28:19.617912  341627 machine.go:93] provisionDockerMachine start ...
	I1009 18:28:19.618242  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:19.642805  341627 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:19.643169  341627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1009 18:28:19.643189  341627 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:28:19.808934  341627 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-847951
	
	I1009 18:28:19.808970  341627 ubuntu.go:182] provisioning hostname "NoKubernetes-847951"
	I1009 18:28:19.809041  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:19.830861  341627 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:19.831072  341627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1009 18:28:19.831085  341627 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-847951 && echo "NoKubernetes-847951" | sudo tee /etc/hostname
	I1009 18:28:20.012662  341627 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-847951
	
	I1009 18:28:20.012761  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:20.034570  341627 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:20.034861  341627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1009 18:28:20.034883  341627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-847951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-847951/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-847951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:20.198504  341627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:20.198533  341627 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-140450/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-140450/.minikube}
	I1009 18:28:20.198564  341627 ubuntu.go:190] setting up certificates
	I1009 18:28:20.198586  341627 provision.go:84] configureAuth start
	I1009 18:28:20.198648  341627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-847951
	I1009 18:28:20.221266  341627 provision.go:143] copyHostCerts
	I1009 18:28:20.221306  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem
	I1009 18:28:20.221346  341627 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem, removing ...
	I1009 18:28:20.221355  341627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem
	I1009 18:28:20.221429  341627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem (1675 bytes)
	I1009 18:28:20.221575  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem
	I1009 18:28:20.221602  341627 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem, removing ...
	I1009 18:28:20.221608  341627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem
	I1009 18:28:20.221654  341627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem (1078 bytes)
	I1009 18:28:20.221731  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem
	I1009 18:28:20.221753  341627 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem, removing ...
	I1009 18:28:20.221760  341627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem
	I1009 18:28:20.221798  341627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem (1123 bytes)
	I1009 18:28:20.221871  341627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-847951 san=[127.0.0.1 192.168.94.2 NoKubernetes-847951 localhost minikube]
	I1009 18:28:20.688322  341627 provision.go:177] copyRemoteCerts
	I1009 18:28:20.688392  341627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:20.688447  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:20.710940  341627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa Username:docker}
	I1009 18:28:20.816976  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:28:20.817044  341627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:28:20.840783  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:28:20.840871  341627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 18:28:20.859439  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:28:20.859499  341627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:28:20.880473  341627 provision.go:87] duration metric: took 681.873246ms to configureAuth
	I1009 18:28:20.880499  341627 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:28:20.880664  341627 config.go:182] Loaded profile config "NoKubernetes-847951": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1009 18:28:20.880685  341627 machine.go:96] duration metric: took 1.262586136s to provisionDockerMachine
	I1009 18:28:20.880695  341627 client.go:171] duration metric: took 8.582142105s to LocalClient.Create
	I1009 18:28:20.880721  341627 start.go:167] duration metric: took 8.58222532s to libmachine.API.Create "NoKubernetes-847951"
	I1009 18:28:20.880734  341627 start.go:293] postStartSetup for "NoKubernetes-847951" (driver="docker")
	I1009 18:28:20.880745  341627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:20.880804  341627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:20.880851  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:20.901440  341627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa Username:docker}
	I1009 18:28:21.008517  341627 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:21.012379  341627 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:21.012418  341627 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:28:21.012430  341627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/addons for local assets ...
	I1009 18:28:21.012482  341627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/files for local assets ...
	I1009 18:28:21.012584  341627 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem -> 1440942.pem in /etc/ssl/certs
	I1009 18:28:21.012602  341627 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem -> /etc/ssl/certs/1440942.pem
	I1009 18:28:21.012713  341627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:28:21.020288  341627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:21.039942  341627 start.go:296] duration metric: took 159.193551ms for postStartSetup
	I1009 18:28:21.040334  341627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-847951
	I1009 18:28:21.059971  341627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/NoKubernetes-847951/config.json ...
	I1009 18:28:21.060305  341627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:21.060357  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:21.081806  341627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa Username:docker}
	I1009 18:28:21.183613  341627 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:21.188431  341627 start.go:128] duration metric: took 8.893524264s to createHost
	I1009 18:28:21.188456  341627 start.go:83] releasing machines lock for "NoKubernetes-847951", held for 8.893677897s
	I1009 18:28:21.188534  341627 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-847951
	I1009 18:28:21.210070  341627 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:21.210107  341627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:21.210145  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:21.210189  341627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-847951
	I1009 18:28:21.229828  341627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa Username:docker}
	I1009 18:28:21.232476  341627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/NoKubernetes-847951/id_rsa Username:docker}
	I1009 18:28:21.339151  341627 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:21.406743  341627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:28:21.412428  341627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:28:21.412501  341627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:21.437569  341627 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:28:21.437596  341627 start.go:495] detecting cgroup driver to use...
	I1009 18:28:21.437630  341627 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:28:21.437683  341627 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 18:28:21.453198  341627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 18:28:21.466012  341627 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:28:21.466069  341627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:21.485872  341627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:21.507609  341627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:21.599504  341627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:21.701314  341627 docker.go:234] disabling docker service ...
	I1009 18:28:21.701372  341627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:21.722496  341627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:21.736943  341627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:21.836040  341627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:21.927552  341627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:21.940929  341627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:21.958518  341627 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1009 18:28:21.958615  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 18:28:21.973917  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 18:28:21.985308  341627 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1009 18:28:21.985381  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1009 18:28:21.996356  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:22.006661  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 18:28:19.673531  335435 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-729726:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (6.519774325s)
	I1009 18:28:19.673559  335435 kic.go:203] duration metric: took 6.519915 seconds to extract preloaded images to volume
	W1009 18:28:19.673644  335435 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:28:19.673686  335435 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:28:19.673744  335435 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:28:19.757030  335435 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-729726 --name stopped-upgrade-729726 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-729726 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-729726 --network stopped-upgrade-729726 --ip 192.168.103.2 --volume stopped-upgrade-729726:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1009 18:28:20.075190  335435 cli_runner.go:164] Run: docker container inspect stopped-upgrade-729726 --format={{.State.Running}}
	I1009 18:28:20.094791  335435 cli_runner.go:164] Run: docker container inspect stopped-upgrade-729726 --format={{.State.Status}}
	I1009 18:28:20.117486  335435 cli_runner.go:164] Run: docker exec stopped-upgrade-729726 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:28:20.171945  335435 oci.go:144] the created container "stopped-upgrade-729726" has a running status.
	I1009 18:28:20.171981  335435 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa...
	I1009 18:28:20.303522  335435 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:28:20.338238  335435 cli_runner.go:164] Run: docker container inspect stopped-upgrade-729726 --format={{.State.Status}}
	I1009 18:28:20.359086  335435 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:28:20.359126  335435 kic_runner.go:114] Args: [docker exec --privileged stopped-upgrade-729726 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:28:20.427504  335435 cli_runner.go:164] Run: docker container inspect stopped-upgrade-729726 --format={{.State.Status}}
	I1009 18:28:20.452422  335435 machine.go:88] provisioning docker machine ...
	I1009 18:28:20.452475  335435 ubuntu.go:169] provisioning hostname "stopped-upgrade-729726"
	I1009 18:28:20.453230  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:20.482791  335435 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:20.483407  335435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1009 18:28:20.483425  335435 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-729726 && echo "stopped-upgrade-729726" | sudo tee /etc/hostname
	I1009 18:28:20.627495  335435 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-729726
	
	I1009 18:28:20.627580  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:20.650100  335435 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:20.650631  335435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1009 18:28:20.650652  335435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-729726' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-729726/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-729726' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:20.772682  335435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:20.772707  335435 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21139-140450/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-140450/.minikube}
	I1009 18:28:20.772729  335435 ubuntu.go:177] setting up certificates
	I1009 18:28:20.772742  335435 provision.go:83] configureAuth start
	I1009 18:28:20.772805  335435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-729726
	I1009 18:28:20.791620  335435 provision.go:138] copyHostCerts
	I1009 18:28:20.791670  335435 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem, removing ...
	I1009 18:28:20.791676  335435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem
	I1009 18:28:20.791730  335435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/ca.pem (1078 bytes)
	I1009 18:28:20.791808  335435 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem, removing ...
	I1009 18:28:20.791811  335435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem
	I1009 18:28:20.791834  335435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/cert.pem (1123 bytes)
	I1009 18:28:20.791884  335435 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem, removing ...
	I1009 18:28:20.791887  335435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem
	I1009 18:28:20.791907  335435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-140450/.minikube/key.pem (1675 bytes)
	I1009 18:28:20.791948  335435 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-729726 san=[192.168.103.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-729726]
	I1009 18:28:20.863434  335435 provision.go:172] copyRemoteCerts
	I1009 18:28:20.863497  335435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:20.863532  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:20.883497  335435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa Username:docker}
	I1009 18:28:20.971098  335435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:28:20.995802  335435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 18:28:21.021297  335435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:28:21.046980  335435 provision.go:86] duration metric: configureAuth took 274.223291ms
	I1009 18:28:21.047006  335435 ubuntu.go:193] setting minikube options for container-runtime
	I1009 18:28:21.047226  335435 config.go:182] Loaded profile config "stopped-upgrade-729726": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1009 18:28:21.047248  335435 machine.go:91] provisioned docker machine in 594.797596ms
	I1009 18:28:21.047256  335435 client.go:171] LocalClient.Create took 10.90361302s
	I1009 18:28:21.047281  335435 start.go:167] duration metric: libmachine.API.Create for "stopped-upgrade-729726" took 10.903673823s
	I1009 18:28:21.047290  335435 start.go:300] post-start starting for "stopped-upgrade-729726" (driver="docker")
	I1009 18:28:21.047303  335435 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:21.047358  335435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:21.047393  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:21.067022  335435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa Username:docker}
	I1009 18:28:21.157789  335435 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:21.161163  335435 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:21.161198  335435 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 18:28:21.161211  335435 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 18:28:21.161218  335435 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1009 18:28:21.161230  335435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/addons for local assets ...
	I1009 18:28:21.161292  335435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-140450/.minikube/files for local assets ...
	I1009 18:28:21.161386  335435 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem -> 1440942.pem in /etc/ssl/certs
	I1009 18:28:21.161525  335435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:28:21.169756  335435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:21.197617  335435 start.go:303] post-start completed in 150.314441ms
	I1009 18:28:21.197997  335435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-729726
	I1009 18:28:21.218345  335435 profile.go:148] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/stopped-upgrade-729726/config.json ...
	I1009 18:28:21.218638  335435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:21.218680  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:21.239053  335435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa Username:docker}
	I1009 18:28:21.327875  335435 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:21.332485  335435 start.go:128] duration metric: createHost completed in 11.190379079s
	I1009 18:28:21.332501  335435 start.go:83] releasing machines lock for "stopped-upgrade-729726", held for 11.190529177s
	I1009 18:28:21.332574  335435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-729726
	I1009 18:28:21.354546  335435 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:21.354567  335435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:21.354593  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:21.354639  335435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-729726
	I1009 18:28:21.376375  335435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa Username:docker}
	I1009 18:28:21.376840  335435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/stopped-upgrade-729726/id_rsa Username:docker}
	I1009 18:28:21.570253  335435 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:21.576022  335435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:28:21.580553  335435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1009 18:28:21.610907  335435 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1009 18:28:21.610980  335435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:21.648932  335435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1009 18:28:21.648951  335435 start.go:472] detecting cgroup driver to use...
	I1009 18:28:21.648980  335435 detect.go:199] detected "systemd" cgroup driver on host os
	I1009 18:28:21.649031  335435 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 18:28:21.664113  335435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 18:28:21.676614  335435 docker.go:203] disabling cri-docker service (if available) ...
	I1009 18:28:21.676665  335435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:21.692807  335435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:21.706960  335435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:21.797361  335435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:21.890247  335435 docker.go:219] disabling docker service ...
	I1009 18:28:21.890309  335435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:21.910938  335435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:21.923600  335435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:22.014142  335435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:22.112640  335435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:22.125462  335435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:22.142895  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 18:28:22.159060  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 18:28:22.169807  335435 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I1009 18:28:22.169865  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1009 18:28:22.180428  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:22.191179  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 18:28:22.201401  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:22.212648  335435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:22.222803  335435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 18:28:22.233651  335435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:22.243076  335435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:22.253816  335435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:22.016264  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:28:22.025798  341627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:22.034867  341627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 18:28:22.044087  341627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:22.055363  341627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:22.062843  341627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:22.152470  341627 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 18:28:22.253368  341627 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 18:28:22.253443  341627 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 18:28:22.257875  341627 start.go:563] Will wait 60s for crictl version
	I1009 18:28:22.257930  341627 ssh_runner.go:195] Run: which crictl
	I1009 18:28:22.262658  341627 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:28:22.295061  341627 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1009 18:28:22.295174  341627 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:22.322191  341627 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:22.349785  341627 out.go:179] * Preparing containerd 1.7.28 ...
	I1009 18:28:22.351108  341627 ssh_runner.go:195] Run: rm -f paused
	I1009 18:28:22.356194  341627 out.go:179] * Done! minikube is ready without Kubernetes!
	I1009 18:28:22.358761  341627 out.go:203] ╭──────────────────────────────────────────────────────────╮
	│                                                          │
	│          * Things to try without Kubernetes ...          │
	│                                                          │
	│    - "minikube ssh" to SSH into minikube's node.         │
	│    - "minikube image" to build images without docker.    │
	│                                                          │
	╰──────────────────────────────────────────────────────────╯
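The sed invocations logged above rewrite `/etc/containerd/config.toml` before containerd is restarted. A minimal sketch of the two key rewrites, using a demo file under `/tmp` (an assumption; the real target is `/etc/containerd/config.toml`, and GNU sed's in-place `-i` is assumed):

```shell
# Demo copy of the two settings minikube rewrites (hypothetical path).
CFG=/tmp/config-demo.toml
cat > "$CFG" <<'EOF'
    SystemdCgroup = false
    runtime_type = "io.containerd.runtime.v1.linux"
EOF
# Same substitutions as the logged commands: turn the systemd cgroup
# driver on and replace the legacy v1 linux runtime with the runc v2 shim.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' "$CFG"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
cat "$CFG"
```

After these edits the log runs `systemctl daemon-reload` and `systemctl restart containerd` so the new settings take effect.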
	I1009 18:28:20.058401  340368 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-701596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:20.079994  340368 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 18:28:20.084756  340368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:20.098572  340368 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-701596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-701596 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:28:20.098727  340368 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1009 18:28:20.098797  340368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:20.129790  340368 containerd.go:627] all images are preloaded for containerd runtime.
	I1009 18:28:20.129817  340368 containerd.go:534] Images already preloaded, skipping extraction
	I1009 18:28:20.129883  340368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:20.161987  340368 containerd.go:627] all images are preloaded for containerd runtime.
	I1009 18:28:20.162007  340368 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:28:20.162016  340368 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1009 18:28:20.162106  340368 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-701596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-701596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:28:20.162176  340368 ssh_runner.go:195] Run: sudo crictl info
	I1009 18:28:20.197595  340368 cni.go:84] Creating CNI manager for ""
	I1009 18:28:20.197625  340368 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:28:20.197717  340368 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:28:20.197782  340368 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-701596 NodeName:kubernetes-upgrade-701596 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:28:20.197994  340368 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-701596"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
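One detail worth noting in the generated config above: the KubeletConfiguration sets `cgroupDriver: systemd`, which must agree with `SystemdCgroup = true` in containerd's config, or pod sandboxes fail to start with cgroup errors. A sketch of that consistency check, against hypothetical demo files (the real paths are `/var/tmp/minikube/kubeadm.yaml` and `/etc/containerd/config.toml`):

```shell
# Hypothetical stand-ins for the two generated config files.
cat > /tmp/kubeadm-demo.yaml <<'EOF'
cgroupDriver: systemd
EOF
cat > /tmp/containerd-demo.toml <<'EOF'
SystemdCgroup = true
EOF
# Both sides must name the same cgroup driver.
grep -q 'cgroupDriver: systemd' /tmp/kubeadm-demo.yaml \
  && grep -q 'SystemdCgroup = true' /tmp/containerd-demo.toml \
  && echo "cgroup drivers consistent"
```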
	
	I1009 18:28:20.198077  340368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1009 18:28:20.209070  340368 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:28:20.209159  340368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:28:20.218347  340368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1009 18:28:20.238709  340368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:28:20.254343  340368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I1009 18:28:20.270354  340368 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:28:20.274555  340368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:20.288353  340368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:20.411510  340368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:28:20.439054  340368 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596 for IP: 192.168.76.2
	I1009 18:28:20.439079  340368 certs.go:195] generating shared ca certs ...
	I1009 18:28:20.439102  340368 certs.go:227] acquiring lock for ca certs: {Name:mk886b151c2ee368fca29ea3aee2e1e334a9b55c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.439638  340368 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-140450/.minikube/ca.key
	I1009 18:28:20.439777  340368 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.key
	I1009 18:28:20.439824  340368 certs.go:257] generating profile certs ...
	I1009 18:28:20.439940  340368 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.key
	I1009 18:28:20.440011  340368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.crt with IP's: []
	I1009 18:28:20.918609  340368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.crt ...
	I1009 18:28:20.918646  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.crt: {Name:mke63eff4621790fb9613d101045ddd5ef8b433f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.918841  340368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.key ...
	I1009 18:28:20.918865  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/client.key: {Name:mkc0b317bfe82bf79a00685c75590df3337845d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:20.918988  340368 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key.59c826b3
	I1009 18:28:20.919011  340368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt.59c826b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 18:28:21.187031  340368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt.59c826b3 ...
	I1009 18:28:21.187061  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt.59c826b3: {Name:mke7e24ad9abf21bcd1fd5a13807745c3519a23e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:21.187273  340368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key.59c826b3 ...
	I1009 18:28:21.187302  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key.59c826b3: {Name:mk47f7d9887d78713120b9f3236f0f5f1523adc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:21.187433  340368 certs.go:382] copying /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt.59c826b3 -> /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt
	I1009 18:28:21.187557  340368 certs.go:386] copying /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key.59c826b3 -> /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key
	I1009 18:28:21.188307  340368 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.key
	I1009 18:28:21.188336  340368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.crt with IP's: []
	I1009 18:28:21.603655  340368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.crt ...
	I1009 18:28:21.603681  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.crt: {Name:mk91c0b64c87af88f04bf404fd81f8baa12d700a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:21.603850  340368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.key ...
	I1009 18:28:21.603873  340368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.key: {Name:mk9549ae414f5a458797b1ddd3d4310db3c43aef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:21.604103  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094.pem (1338 bytes)
	W1009 18:28:21.604177  340368 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094_empty.pem, impossibly tiny 0 bytes
	I1009 18:28:21.604193  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:28:21.604283  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:28:21.604327  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:28:21.604360  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/certs/key.pem (1675 bytes)
	I1009 18:28:21.604413  340368 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem (1708 bytes)
	I1009 18:28:21.605176  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:28:21.622646  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:28:21.650427  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:28:21.670709  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:28:21.689457  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 18:28:21.707517  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 18:28:21.727603  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:28:21.754253  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/kubernetes-upgrade-701596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:28:21.774032  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/certs/144094.pem --> /usr/share/ca-certificates/144094.pem (1338 bytes)
	I1009 18:28:21.798273  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/ssl/certs/1440942.pem --> /usr/share/ca-certificates/1440942.pem (1708 bytes)
	I1009 18:28:21.816038  340368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-140450/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:28:21.839017  340368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:28:21.851618  340368 ssh_runner.go:195] Run: openssl version
	I1009 18:28:21.858281  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144094.pem && ln -fs /usr/share/ca-certificates/144094.pem /etc/ssl/certs/144094.pem"
	I1009 18:28:21.868896  340368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.878621  340368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:03 /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.878700  340368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144094.pem
	I1009 18:28:21.915844  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144094.pem /etc/ssl/certs/51391683.0"
	I1009 18:28:21.925107  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1440942.pem && ln -fs /usr/share/ca-certificates/1440942.pem /etc/ssl/certs/1440942.pem"
	I1009 18:28:21.933728  340368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.937809  340368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:03 /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.937863  340368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1440942.pem
	I1009 18:28:21.982716  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1440942.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:28:21.992499  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:28:22.002919  340368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:22.007490  340368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:22.007541  340368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:22.062433  340368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
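The `openssl x509 -hash` / `ln -fs` sequence above installs each CA into the system trust store: OpenSSL locates trusted CAs by subject-name hash, so each certificate is linked into `/etc/ssl/certs` as `<hash>.0` (the `51391683.0`-style names in the log). A sketch with a throwaway self-signed CA under an assumed `/tmp` directory:

```shell
# Demo directory is an assumption; the log links from
# /usr/share/ca-certificates into /etc/ssl/certs.
DIR=/tmp/cahash-demo
mkdir -p "$DIR"
# Throwaway self-signed CA standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" -days 1 2>/dev/null
# Compute the subject hash and create the <hash>.0 symlink.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
ls "$DIR"
```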
	I1009 18:28:22.070875  340368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:28:22.074358  340368 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:28:22.074423  340368 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-701596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-701596 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:22.074505  340368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1009 18:28:22.074550  340368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:22.110154  340368 cri.go:89] found id: ""
	I1009 18:28:22.110231  340368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:28:22.118813  340368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:28:22.126745  340368 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:28:22.126802  340368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:28:22.134400  340368 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:28:22.134422  340368 kubeadm.go:157] found existing configuration files:
	
	I1009 18:28:22.134463  340368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:28:22.141808  340368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:28:22.141858  340368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:28:22.148897  340368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:28:22.156863  340368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:28:22.156916  340368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:28:22.164762  340368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:28:22.173014  340368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:28:22.173075  340368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:28:22.180346  340368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:28:22.187939  340368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:28:22.187997  340368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
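The four grep/rm pairs above implement stale-config cleanup: each kubeconfig under `/etc/kubernetes` is kept only if it already references the expected control-plane endpoint, and is otherwise removed so `kubeadm init` regenerates it. A sketch of that loop against an assumed demo directory (the real files live in `/etc/kubernetes`):

```shell
# Demo directory is an assumption; the log operates on /etc/kubernetes.
DIR=/tmp/kubecfg-demo
mkdir -p "$DIR"
ENDPOINT="https://control-plane.minikube.internal:8443"
echo "server: $ENDPOINT" > "$DIR/admin.conf"
echo "server: https://old-host:8443" > "$DIR/kubelet.conf"
# Keep a kubeconfig only if it already points at the expected endpoint;
# otherwise delete it so kubeadm regenerates it on init.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  grep -qs "$ENDPOINT" "$DIR/$f" || rm -f "$DIR/$f"
done
ls "$DIR"
```

In the run above every grep exits with status 2 (the files do not exist on a fresh node), so all four paths fall through to `rm -f` and kubeadm starts from a clean slate.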
	I1009 18:28:22.195683  340368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:28:22.244219  340368 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1009 18:28:22.244299  340368 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:28:22.287063  340368 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:28:22.287194  340368 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:28:22.287233  340368 kubeadm.go:318] OS: Linux
	I1009 18:28:22.287268  340368 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:28:22.287321  340368 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:28:22.287384  340368 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:28:22.287449  340368 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:28:22.287556  340368 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:28:22.287645  340368 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:28:22.287710  340368 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:28:22.287799  340368 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:28:22.376864  340368 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:28:22.377002  340368 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:28:22.377258  340368 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 18:28:22.547559  340368 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:28:22.324919  335435 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 18:28:22.447664  335435 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 18:28:22.447746  335435 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 18:28:22.452401  335435 start.go:540] Will wait 60s for crictl version
	I1009 18:28:22.452459  335435 ssh_runner.go:195] Run: which crictl
	I1009 18:28:22.457435  335435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:28:22.506985  335435 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1009 18:28:22.507084  335435 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:22.536243  335435 ssh_runner.go:195] Run: containerd --version
	I1009 18:28:22.567259  335435 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.6.24 ...
	I1009 18:28:22.550082  340368 out.go:252]   - Generating certificates and keys ...
	I1009 18:28:22.550192  340368 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:28:22.550307  340368 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:28:22.788609  340368 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:28:23.348103  340368 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:28:23.499549  340368 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:28:23.637959  340368 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:28:23.743424  340368 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:28:23.743637  340368 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-701596 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 18:28:24.090444  340368 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:28:24.090722  340368 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-701596 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247167716Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247231782Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247253194Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247265141Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247281744Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247294672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247310309Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247323429Z" level=info msg="NRI interface is disabled by configuration."
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247337048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247679655Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRunti
meSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.9 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingH
ugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247746746Z" level=info msg="Connect containerd service"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247795876Z" level=info msg="using legacy CRI server"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247807961Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.247953040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.248732612Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.248907763Z" level=info msg="Start subscribing containerd event"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.249253109Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.249475409Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.249583087Z" level=info msg="Start recovering state"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.250277259Z" level=info msg="Start event monitor"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.250378167Z" level=info msg="Start snapshots syncer"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.250406899Z" level=info msg="Start cni network conf syncer for default"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.250428939Z" level=info msg="Start streaming server"
	Oct 09 18:28:22 NoKubernetes-847951 containerd[658]: time="2025-10-09T18:28:22.250583911Z" level=info msg="containerd successfully booted in 0.041875s"
	Oct 09 18:28:22 NoKubernetes-847951 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	
	
	==> dmesg <==
	[Oct 9 17:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001883] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.081021] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.375327] i8042: Warning: Keylock active
	[  +0.011676] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003214] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000906] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000935] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001129] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000664] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000730] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000835] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.448086] block sda: the capability attribute has been deprecated.
	[  +0.076799] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.019944] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.638606] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:28:24 up  1:10,  0 user,  load average: 6.82, 2.65, 10.81
	Linux NoKubernetes-847951 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	-- No entries --
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p NoKubernetes-847951 -n NoKubernetes-847951
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p NoKubernetes-847951 -n NoKubernetes-847951: exit status 6 (317.507061ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:28:25.113476  348530 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-847951" does not appear in /home/jenkins/minikube-integration/21139-140450/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "NoKubernetes-847951" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (2.75s)


Test pass (307/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 17.11
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 11.84
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.81
22 TestOffline 56.09
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 158.54
29 TestAddons/serial/Volcano 39.3
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.47
35 TestAddons/parallel/Registry 30.73
36 TestAddons/parallel/RegistryCreds 0.68
37 TestAddons/parallel/Ingress 21.23
38 TestAddons/parallel/InspektorGadget 5.27
39 TestAddons/parallel/MetricsServer 6.64
41 TestAddons/parallel/CSI 70.55
42 TestAddons/parallel/Headlamp 37.49
43 TestAddons/parallel/CloudSpanner 5.52
44 TestAddons/parallel/LocalPath 56.71
45 TestAddons/parallel/NvidiaDevicePlugin 6.53
46 TestAddons/parallel/Yakd 10.74
47 TestAddons/parallel/AmdGpuDevicePlugin 6.5
48 TestAddons/StoppedEnableDisable 12.53
49 TestCertOptions 24.63
50 TestCertExpiration 218.95
52 TestForceSystemdFlag 26.78
53 TestForceSystemdEnv 32.58
54 TestDockerEnvContainerd 38.59
55 TestKVMDriverInstallOrUpdate 1.15
59 TestErrorSpam/setup 20.98
60 TestErrorSpam/start 0.62
61 TestErrorSpam/status 0.93
62 TestErrorSpam/pause 1.43
63 TestErrorSpam/unpause 1.5
64 TestErrorSpam/stop 1.41
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 38.9
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.04
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.89
76 TestFunctional/serial/CacheCmd/cache/add_local 2.01
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 45.84
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.22
87 TestFunctional/serial/LogsFileCmd 1.25
88 TestFunctional/serial/InvalidService 4.16
90 TestFunctional/parallel/ConfigCmd 0.37
91 TestFunctional/parallel/DashboardCmd 15.1
92 TestFunctional/parallel/DryRun 0.48
93 TestFunctional/parallel/InternationalLanguage 0.21
94 TestFunctional/parallel/StatusCmd 1.01
98 TestFunctional/parallel/ServiceCmdConnect 8.54
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 39.82
102 TestFunctional/parallel/SSHCmd 0.57
103 TestFunctional/parallel/CpCmd 1.82
104 TestFunctional/parallel/MySQL 27.94
105 TestFunctional/parallel/FileSync 0.31
106 TestFunctional/parallel/CertSync 1.77
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
114 TestFunctional/parallel/License 0.49
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.56
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.39
122 TestFunctional/parallel/ImageCommands/Setup 1.95
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
126 TestFunctional/parallel/ServiceCmd/DeployApp 8.18
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.87
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.56
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
135 TestFunctional/parallel/ProfileCmd/profile_list 0.38
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
137 TestFunctional/parallel/ServiceCmd/List 0.34
139 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.23
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
145 TestFunctional/parallel/ServiceCmd/Format 0.36
146 TestFunctional/parallel/ServiceCmd/URL 0.37
147 TestFunctional/parallel/MountCmd/any-port 20.81
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/parallel/MountCmd/specific-port 1.81
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.77
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 159.76
164 TestMultiControlPlane/serial/DeployApp 6.12
165 TestMultiControlPlane/serial/PingHostFromPods 1.07
166 TestMultiControlPlane/serial/AddWorkerNode 23.61
167 TestMultiControlPlane/serial/NodeLabels 0.08
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
169 TestMultiControlPlane/serial/CopyFile 17.06
170 TestMultiControlPlane/serial/StopSecondaryNode 12.63
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
172 TestMultiControlPlane/serial/RestartSecondaryNode 9.11
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 94.49
175 TestMultiControlPlane/serial/DeleteSecondaryNode 9.13
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
177 TestMultiControlPlane/serial/StopCluster 35.85
178 TestMultiControlPlane/serial/RestartCluster 53.37
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
180 TestMultiControlPlane/serial/AddSecondaryNode 44.07
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
185 TestJSONOutput/start/Command 38.64
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.69
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.58
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.72
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 34.15
211 TestKicCustomNetwork/use_default_bridge_network 23.43
212 TestKicExistingNetwork 23.4
213 TestKicCustomSubnet 25.57
214 TestKicStaticIP 25.52
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 51.1
219 TestMountStart/serial/StartWithMountFirst 4.84
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 5.37
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.66
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.2
226 TestMountStart/serial/RestartStopped 8.03
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 64.62
231 TestMultiNode/serial/DeployApp2Nodes 5.32
232 TestMultiNode/serial/PingHostFrom2Pods 0.74
233 TestMultiNode/serial/AddNode 24.12
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.69
236 TestMultiNode/serial/CopyFile 9.93
237 TestMultiNode/serial/StopNode 2.25
238 TestMultiNode/serial/StartAfterStop 7.15
239 TestMultiNode/serial/RestartKeepsNodes 68.65
240 TestMultiNode/serial/DeleteNode 5.14
241 TestMultiNode/serial/StopMultiNode 23.85
242 TestMultiNode/serial/RestartMultiNode 48.32
243 TestMultiNode/serial/ValidateNameConflict 23.3
248 TestPreload 116.68
250 TestScheduledStopUnix 99.99
253 TestInsufficientStorage 9.66
254 TestRunningBinaryUpgrade 51.77
256 TestKubernetesUpgrade 143.87
257 TestMissingContainerUpgrade 143.31
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
263 TestNoKubernetes/serial/StartWithK8s 33.48
268 TestNetworkPlugins/group/false 7.85
272 TestStoppedBinaryUpgrade/Setup 3.04
273 TestStoppedBinaryUpgrade/Upgrade 86.88
274 TestNoKubernetes/serial/StartWithStopK8s 25.37
275 TestNoKubernetes/serial/Start 10.44
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
278 TestNoKubernetes/serial/ProfileList 6.48
279 TestNoKubernetes/serial/Stop 1.29
280 TestNoKubernetes/serial/StartNoArgs 6.9
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
282 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
291 TestPause/serial/Start 45.97
292 TestNetworkPlugins/group/auto/Start 44.04
293 TestNetworkPlugins/group/kindnet/Start 42.02
294 TestPause/serial/SecondStartNoReconfiguration 6.18
295 TestPause/serial/Pause 0.68
296 TestPause/serial/VerifyStatus 0.32
297 TestPause/serial/Unpause 0.63
298 TestPause/serial/PauseAgain 0.72
299 TestPause/serial/DeletePaused 2.79
300 TestPause/serial/VerifyDeletedResources 1.93
301 TestNetworkPlugins/group/calico/Start 45.29
302 TestNetworkPlugins/group/auto/KubeletFlags 0.31
303 TestNetworkPlugins/group/auto/NetCatPod 8.22
304 TestNetworkPlugins/group/auto/DNS 0.16
305 TestNetworkPlugins/group/auto/Localhost 0.13
306 TestNetworkPlugins/group/auto/HairPin 0.13
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
309 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
310 TestNetworkPlugins/group/kindnet/DNS 0.16
311 TestNetworkPlugins/group/kindnet/Localhost 0.14
312 TestNetworkPlugins/group/kindnet/HairPin 0.13
313 TestNetworkPlugins/group/custom-flannel/Start 51.6
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.32
316 TestNetworkPlugins/group/calico/NetCatPod 9.24
317 TestNetworkPlugins/group/enable-default-cni/Start 37.51
318 TestNetworkPlugins/group/calico/DNS 0.14
319 TestNetworkPlugins/group/calico/Localhost 0.11
320 TestNetworkPlugins/group/calico/HairPin 0.1
321 TestNetworkPlugins/group/flannel/Start 57.89
322 TestNetworkPlugins/group/bridge/Start 41.07
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
327 TestNetworkPlugins/group/custom-flannel/DNS 0.15
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
330 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
331 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
332 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
334 TestStartStop/group/old-k8s-version/serial/FirstStart 52.24
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
336 TestNetworkPlugins/group/bridge/NetCatPod 9.96
338 TestStartStop/group/no-preload/serial/FirstStart 56.2
339 TestNetworkPlugins/group/flannel/ControllerPod 6.01
340 TestNetworkPlugins/group/bridge/DNS 0.15
341 TestNetworkPlugins/group/bridge/Localhost 0.13
342 TestNetworkPlugins/group/bridge/HairPin 0.14
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
344 TestNetworkPlugins/group/flannel/NetCatPod 8.26
345 TestNetworkPlugins/group/flannel/DNS 0.15
346 TestNetworkPlugins/group/flannel/Localhost 0.17
347 TestNetworkPlugins/group/flannel/HairPin 0.16
349 TestStartStop/group/embed-certs/serial/FirstStart 43.34
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.74
352 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
354 TestStartStop/group/no-preload/serial/DeployApp 9.27
355 TestStartStop/group/old-k8s-version/serial/Stop 12.12
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.81
357 TestStartStop/group/no-preload/serial/Stop 11.97
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
359 TestStartStop/group/old-k8s-version/serial/SecondStart 49.54
360 TestStartStop/group/embed-certs/serial/DeployApp 10.23
361 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
362 TestStartStop/group/no-preload/serial/SecondStart 45.33
363 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.83
364 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.31
365 TestStartStop/group/embed-certs/serial/Stop 12.45
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.81
367 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.27
368 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
369 TestStartStop/group/embed-certs/serial/SecondStart 48.65
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
371 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.01
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
375 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
376 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
377 TestStartStop/group/old-k8s-version/serial/Pause 2.82
378 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
379 TestStartStop/group/no-preload/serial/Pause 2.94
381 TestStartStop/group/newest-cni/serial/FirstStart 25.78
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
386 TestStartStop/group/embed-certs/serial/Pause 2.78
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
389 TestStartStop/group/newest-cni/serial/DeployApp 0
390 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.81
391 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.89
392 TestStartStop/group/newest-cni/serial/Stop 1.25
393 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
394 TestStartStop/group/newest-cni/serial/SecondStart 11.73
395 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
396 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
397 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
398 TestStartStop/group/newest-cni/serial/Pause 2.46
TestDownloadOnly/v1.28.0/json-events (17.11s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-776711 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-776711 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.111409043s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (17.11s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1009 17:57:09.915752  144094 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1009 17:57:09.915850  144094 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-776711
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-776711: exit status 85 (66.843349ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-776711 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-776711 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 17:56:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 17:56:52.847556  144107 out.go:360] Setting OutFile to fd 1 ...
	I1009 17:56:52.847893  144107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:52.847902  144107 out.go:374] Setting ErrFile to fd 2...
	I1009 17:56:52.847906  144107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:52.848093  144107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	W1009 17:56:52.848266  144107 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21139-140450/.minikube/config/config.json: open /home/jenkins/minikube-integration/21139-140450/.minikube/config/config.json: no such file or directory
	I1009 17:56:52.848779  144107 out.go:368] Setting JSON to true
	I1009 17:56:52.850211  144107 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2353,"bootTime":1760030260,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 17:56:52.850316  144107 start.go:141] virtualization: kvm guest
	I1009 17:56:52.852353  144107 out.go:99] [download-only-776711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1009 17:56:52.852476  144107 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 17:56:52.852535  144107 notify.go:220] Checking for updates...
	I1009 17:56:52.853842  144107 out.go:171] MINIKUBE_LOCATION=21139
	I1009 17:56:52.855089  144107 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 17:56:52.856344  144107 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	I1009 17:56:52.857712  144107 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	I1009 17:56:52.859183  144107 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 17:56:52.861208  144107 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 17:56:52.861415  144107 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 17:56:52.883931  144107 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 17:56:52.884013  144107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 17:56:53.197363  144107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 17:56:53.186682605 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 17:56:53.197551  144107 docker.go:318] overlay module found
	I1009 17:56:53.199217  144107 out.go:99] Using the docker driver based on user configuration
	I1009 17:56:53.199262  144107 start.go:305] selected driver: docker
	I1009 17:56:53.199273  144107 start.go:925] validating driver "docker" against <nil>
	I1009 17:56:53.199382  144107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 17:56:53.255988  144107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 17:56:53.245286252 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 17:56:53.256152  144107 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 17:56:53.256896  144107 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1009 17:56:53.257160  144107 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 17:56:53.258748  144107 out.go:171] Using Docker driver with root privileges
	I1009 17:56:53.259973  144107 cni.go:84] Creating CNI manager for ""
	I1009 17:56:53.260049  144107 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 17:56:53.260061  144107 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 17:56:53.260152  144107 start.go:349] cluster config:
	{Name:download-only-776711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-776711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 17:56:53.261409  144107 out.go:99] Starting "download-only-776711" primary control-plane node in "download-only-776711" cluster
	I1009 17:56:53.261431  144107 cache.go:133] Beginning downloading kic base image for docker with containerd
	I1009 17:56:53.262544  144107 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1009 17:56:53.262578  144107 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1009 17:56:53.262618  144107 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 17:56:53.278745  144107 cache.go:162] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 17:56:53.279336  144107 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 17:56:53.279457  144107 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 17:56:53.370294  144107 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1009 17:56:53.370328  144107 cache.go:64] Caching tarball of preloaded images
	I1009 17:56:53.370970  144107 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1009 17:56:53.372713  144107 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1009 17:56:53.372747  144107 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1009 17:56:53.489492  144107 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1009 17:56:53.489613  144107 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1009 17:57:06.323013  144107 cache.go:67] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1009 17:57:06.323537  144107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/download-only-776711/config.json ...
	I1009 17:57:06.323584  144107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/download-only-776711/config.json: {Name:mk3b16a8610072fc686cd5153750dcdd384f6f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:06.324488  144107 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1009 17:57:06.324749  144107 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21139-140450/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-776711 host does not exist
	  To start a cluster, run: "minikube start -p download-only-776711"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-776711
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (11.84s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-379390 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-379390 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.840357651s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.84s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1009 17:57:22.198090  144094 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1009 17:57:22.198147  144094 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-379390
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-379390: exit status 85 (66.362368ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-776711 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-776711 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 09 Oct 25 17:57 UTC │ 09 Oct 25 17:57 UTC │
	│ delete  │ -p download-only-776711                                                                                                                                                               │ download-only-776711 │ jenkins │ v1.37.0 │ 09 Oct 25 17:57 UTC │ 09 Oct 25 17:57 UTC │
	│ start   │ -o=json --download-only -p download-only-379390 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-379390 │ jenkins │ v1.37.0 │ 09 Oct 25 17:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 17:57:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 17:57:10.403041  144493 out.go:360] Setting OutFile to fd 1 ...
	I1009 17:57:10.403305  144493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:57:10.403313  144493 out.go:374] Setting ErrFile to fd 2...
	I1009 17:57:10.403317  144493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:57:10.403518  144493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	I1009 17:57:10.404001  144493 out.go:368] Setting JSON to true
	I1009 17:57:10.404849  144493 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2370,"bootTime":1760030260,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 17:57:10.404952  144493 start.go:141] virtualization: kvm guest
	I1009 17:57:10.406940  144493 out.go:99] [download-only-379390] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 17:57:10.407139  144493 notify.go:220] Checking for updates...
	I1009 17:57:10.408309  144493 out.go:171] MINIKUBE_LOCATION=21139
	I1009 17:57:10.409488  144493 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 17:57:10.410701  144493 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	I1009 17:57:10.411880  144493 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	I1009 17:57:10.412931  144493 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 17:57:10.415162  144493 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 17:57:10.415408  144493 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 17:57:10.439749  144493 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 17:57:10.439808  144493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 17:57:10.496659  144493 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 17:57:10.486433078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 17:57:10.496768  144493 docker.go:318] overlay module found
	I1009 17:57:10.498216  144493 out.go:99] Using the docker driver based on user configuration
	I1009 17:57:10.498251  144493 start.go:305] selected driver: docker
	I1009 17:57:10.498260  144493 start.go:925] validating driver "docker" against <nil>
	I1009 17:57:10.498344  144493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 17:57:10.556077  144493 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-09 17:57:10.546373671 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 17:57:10.556262  144493 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 17:57:10.556757  144493 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1009 17:57:10.556893  144493 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 17:57:10.558532  144493 out.go:171] Using Docker driver with root privileges
	I1009 17:57:10.559599  144493 cni.go:84] Creating CNI manager for ""
	I1009 17:57:10.559662  144493 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 17:57:10.559676  144493 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 17:57:10.559741  144493 start.go:349] cluster config:
	{Name:download-only-379390 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-379390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 17:57:10.561052  144493 out.go:99] Starting "download-only-379390" primary control-plane node in "download-only-379390" cluster
	I1009 17:57:10.561075  144493 cache.go:133] Beginning downloading kic base image for docker with containerd
	I1009 17:57:10.562448  144493 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1009 17:57:10.562504  144493 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1009 17:57:10.562595  144493 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 17:57:10.579403  144493 cache.go:162] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 17:57:10.579548  144493 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 17:57:10.579568  144493 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1009 17:57:10.579572  144493 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1009 17:57:10.579586  144493 cache.go:165] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1009 17:57:10.665091  144493 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1009 17:57:10.665147  144493 cache.go:64] Caching tarball of preloaded images
	I1009 17:57:10.665871  144493 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1009 17:57:10.667375  144493 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1009 17:57:10.667388  144493 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1009 17:57:10.777783  144493 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1009 17:57:10.777840  144493 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21139-140450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-379390 host does not exist
	  To start a cluster, run: "minikube start -p download-only-379390"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)
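The preload download above (download.go:108) fetches the tarball with an `?checksum=md5:…` query and verifies the md5 reported by the GCS API. A minimal shell sketch of that verification step, using a locally created stand-in file so it runs without network access (the file name and variables are illustrative, not minikube's actual implementation):

```shell
# Hypothetical sketch of the md5 verification the download step performs.
# We hash a local stand-in instead of the real preload tarball.
payload=/tmp/preload-sketch.tar.lz4
printf 'stand-in for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4' > "$payload"
expected=$(md5sum "$payload" | awk '{print $1}')   # in minikube this value comes from the GCS API
actual=$(md5sum "$payload" | awk '{print $1}')     # recomputed over the downloaded bytes
if [ "$actual" = "$expected" ]; then
  echo "checksum ok"
else
  echo "checksum mismatch: want $expected got $actual" >&2
  exit 1
fi
rm -f "$payload"
```

A mismatch aborts the download before the corrupt tarball is cached, which is why the log requests the checksum from the GCS API before starting the transfer.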

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-379390
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.39s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-248588 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-248588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-248588
--- PASS: TestDownloadOnlyKic (0.39s)

TestBinaryMirror (0.81s)

=== RUN   TestBinaryMirror
I1009 17:57:23.291040  144094 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-031294 --alsologtostderr --binary-mirror http://127.0.0.1:41937 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-031294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-031294
--- PASS: TestBinaryMirror (0.81s)

TestOffline (56.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-818450 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-818450 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (46.065581722s)
helpers_test.go:175: Cleaning up "offline-containerd-818450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-818450
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-818450: (10.021784927s)
--- PASS: TestOffline (56.09s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-072257
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-072257: exit status 85 (63.725775ms)

-- stdout --
	* Profile "addons-072257" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-072257"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-072257
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-072257: exit status 85 (63.911305ms)

-- stdout --
	* Profile "addons-072257" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-072257"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (158.54s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-072257 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-072257 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m38.535622475s)
--- PASS: TestAddons/Setup (158.54s)

TestAddons/serial/Volcano (39.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 15.148494ms
addons_test.go:868: volcano-scheduler stabilized in 15.184385ms
addons_test.go:876: volcano-admission stabilized in 15.232304ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-zp6rj" [633e9120-2019-434a-b802-2e63ddcec6bd] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00276559s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-l4snt" [5f37d2d2-566d-4960-8838-30d940b57af3] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004641297s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-rknr6" [42104e9a-8517-4b16-b3e4-b9162e230a23] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003810939s
addons_test.go:903: (dbg) Run:  kubectl --context addons-072257 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-072257 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-072257 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [cc8eafee-ae93-4240-b88e-ffa22658389e] Pending
helpers_test.go:352: "test-job-nginx-0" [cc8eafee-ae93-4240-b88e-ffa22658389e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [cc8eafee-ae93-4240-b88e-ffa22658389e] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004404189s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-072257 addons disable volcano --alsologtostderr -v=1: (11.936377864s)
--- PASS: TestAddons/serial/Volcano (39.30s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-072257 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-072257 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-072257 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-072257 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8511f8f2-31d3-4f77-b6db-03cddfd90877] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8511f8f2-31d3-4f77-b6db-03cddfd90877] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00367739s
addons_test.go:694: (dbg) Run:  kubectl --context addons-072257 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-072257 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-072257 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

TestAddons/parallel/Registry (30.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.104844ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-mmc6v" [bb84c199-f2f3-417d-b27f-dc7819e3a198] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002350488s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-fpv6l" [6f40ffba-332d-4045-9c2a-53da202cf3a0] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003303939s
addons_test.go:392: (dbg) Run:  kubectl --context addons-072257 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-072257 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-072257 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (19.896447336s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (30.73s)

TestAddons/parallel/RegistryCreds (0.68s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.666542ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-072257
addons_test.go:332: (dbg) Run:  kubectl --context addons-072257 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.68s)

TestAddons/parallel/Ingress (21.23s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-072257 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-072257 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-072257 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0c7526e0-216c-42cd-9461-2c36abe6d66e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0c7526e0-216c-42cd-9461-2c36abe6d66e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003251002s
I1009 18:01:53.452406  144094 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-072257 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-072257 addons disable ingress-dns --alsologtostderr -v=1: (1.330482682s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-072257 addons disable ingress --alsologtostderr -v=1: (7.721875767s)
--- PASS: TestAddons/parallel/Ingress (21.23s)

TestAddons/parallel/InspektorGadget (5.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-2qlpm" [a8cec915-94fe-4b09-a338-24e9da9511a9] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003702631s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.27s)

TestAddons/parallel/MetricsServer (6.64s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 14.479872ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-fx94z" [2a54701d-f2af-4482-8739-b7719c72c371] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003454393s
addons_test.go:463: (dbg) Run:  kubectl --context addons-072257 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.64s)

TestAddons/parallel/CSI (70.55s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.746642ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-072257 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-072257 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2ec36d7a-e178-46e8-8867-15fa49df741b] Pending
helpers_test.go:352: "task-pv-pod" [2ec36d7a-e178-46e8-8867-15fa49df741b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [2ec36d7a-e178-46e8-8867-15fa49df741b] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.003526437s
addons_test.go:572: (dbg) Run:  kubectl --context addons-072257 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-072257 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-072257 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-072257 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-072257 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-072257 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-072257 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b6ee0e2e-4bc8-4c74-ae04-b147e9bae81c] Pending
helpers_test.go:352: "task-pv-pod-restore" [b6ee0e2e-4bc8-4c74-ae04-b147e9bae81c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b6ee0e2e-4bc8-4c74-ae04-b147e9bae81c] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004167622s
addons_test.go:614: (dbg) Run:  kubectl --context addons-072257 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-072257 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-072257 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-072257 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.549692398s)
--- PASS: TestAddons/parallel/CSI (70.55s)
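The repeated `kubectl get pvc … -o jsonpath={.status.phase}` lines above are helpers_test.go polling the claim until it reports `Bound`. A minimal shell sketch of that wait loop; `kubectl` is stubbed out here (hypothetical) so the sketch runs without a cluster, whereas the real helper shells out to the actual binary with the same flags:

```shell
# Stubbed kubectl: always reports Bound so the sketch terminates offline.
kubectl() { echo "Bound"; }

phase=""
polls=0
while [ "$phase" != "Bound" ] && [ "$polls" -lt 30 ]; do
  phase=$(kubectl --context addons-072257 get pvc hpvc -o 'jsonpath={.status.phase}' -n default)
  polls=$((polls + 1))
  # the real helper sleeps between attempts and enforces the 6m0s deadline; omitted here
done
echo "pvc hpvc: $phase after $polls poll(s)"
```

Each poll in the log corresponds to one iteration of such a loop; the test fails only if the deadline elapses before the phase reaches `Bound`.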

TestAddons/parallel/Headlamp (37.49s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-072257 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-5jvx8" [07a0862f-45c9-4312-87d6-5498b4351c01] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-5jvx8" [07a0862f-45c9-4312-87d6-5498b4351c01] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 31.003276173s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-072257 addons disable headlamp --alsologtostderr -v=1: (5.694119539s)
--- PASS: TestAddons/parallel/Headlamp (37.49s)

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-w7wn8" [ce233ada-620e-4e7c-ac9b-9cc868e2148d] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004314261s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (56.71s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-072257 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-072257 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o jsonpath={.status.phase} -n default
2025/10/09 18:01:30 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5aa96f3f-b5f7-4ee2-bd39-3f714aa1e888] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5aa96f3f-b5f7-4ee2-bd39-3f714aa1e888] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5aa96f3f-b5f7-4ee2-bd39-3f714aa1e888] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004124303s
addons_test.go:967: (dbg) Run:  kubectl --context addons-072257 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 ssh "cat /opt/local-path-provisioner/pvc-1d5713d6-fcb9-44e8-b98b-10e46ea3a22f_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-072257 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-072257 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-072257 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.752076849s)
--- PASS: TestAddons/parallel/LocalPath (56.71s)

TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
I1009 18:01:00.531726  144094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-gh6js" [1182a99d-42d8-47ba-83b6-7307ce791d80] Running
I1009 18:01:00.535425  144094 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1009 18:01:00.535447  144094 kapi.go:107] duration metric: took 3.736358ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003123726s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

TestAddons/parallel/Yakd (10.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-md98k" [9443595f-ac6a-4075-8ece-7946d0c3375f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00428526s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-072257 addons disable yakd --alsologtostderr -v=1: (5.730521182s)
--- PASS: TestAddons/parallel/Yakd (10.74s)

TestAddons/parallel/AmdGpuDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-p4gsh" [970effaf-97ec-4379-81d7-5b81c6d0dcb9] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003665707s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-072257 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.50s)

TestAddons/StoppedEnableDisable (12.53s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-072257
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-072257: (12.275092053s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-072257
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-072257
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-072257
--- PASS: TestAddons/StoppedEnableDisable (12.53s)

TestCertOptions (24.63s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-201875 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-201875 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (21.66213212s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-201875 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-201875 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-201875 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-201875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-201875
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-201875: (2.335626852s)
--- PASS: TestCertOptions (24.63s)

TestCertExpiration (218.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-096647 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-096647 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (29.118016608s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-096647 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-096647 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.312492817s)
helpers_test.go:175: Cleaning up "cert-expiration-096647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-096647
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-096647: (2.522501963s)
--- PASS: TestCertExpiration (218.95s)

TestForceSystemdFlag (26.78s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-711290 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-711290 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.527550487s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-711290 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-711290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-711290
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-711290: (1.940094495s)
--- PASS: TestForceSystemdFlag (26.78s)

TestForceSystemdEnv (32.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-855890 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-855890 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.68253511s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-855890 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-855890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-855890
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-855890: (2.56262191s)
--- PASS: TestForceSystemdEnv (32.58s)

TestDockerEnvContainerd (38.59s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-501219 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-501219 --driver=docker  --container-runtime=containerd: (22.314985655s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-501219"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXscerLI/agent.170085" SSH_AGENT_PID="170086" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXscerLI/agent.170085" SSH_AGENT_PID="170086" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXscerLI/agent.170085" SSH_AGENT_PID="170086" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (2.041163918s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXscerLI/agent.170085" SSH_AGENT_PID="170086" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-501219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-501219
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-501219: (2.276340757s)
--- PASS: TestDockerEnvContainerd (38.59s)

TestKVMDriverInstallOrUpdate (1.15s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1009 18:29:29.348429  144094 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1009 18:29:29.348608  144094 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2779701936/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1009 18:29:29.377986  144094 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2779701936/001/docker-machine-driver-kvm2 version is 1.1.1
W1009 18:29:29.378026  144094 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1009 18:29:29.378129  144094 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1009 18:29:29.378172  144094 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2779701936/001/docker-machine-driver-kvm2
I1009 18:29:30.350050  144094 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2779701936/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1009 18:29:30.365388  144094 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2779701936/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.15s)

TestErrorSpam/setup (20.98s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-055743 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-055743 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-055743 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-055743 --driver=docker  --container-runtime=containerd: (20.981322662s)
--- PASS: TestErrorSpam/setup (20.98s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.93s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 status
--- PASS: TestErrorSpam/status (0.93s)

TestErrorSpam/pause (1.43s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 pause
--- PASS: TestErrorSpam/pause (1.43s)

TestErrorSpam/unpause (1.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 stop: (1.223312138s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-055743 --log_dir /tmp/nospam-055743 stop
--- PASS: TestErrorSpam/stop (1.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21139-140450/.minikube/files/etc/test/nested/copy/144094/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.9s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686010 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-686010 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (38.90308877s)
--- PASS: TestFunctional/serial/StartWithProxy (38.90s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.04s)

=== RUN   TestFunctional/serial/SoftStart
I1009 18:04:28.444201  144094 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686010 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-686010 --alsologtostderr -v=8: (6.037316464s)
functional_test.go:678: soft start took 6.038304959s for "functional-686010" cluster.
I1009 18:04:34.481891  144094 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.04s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-686010 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-686010 cache add registry.k8s.io/pause:3.3: (1.125946534s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.89s)

TestFunctional/serial/CacheCmd/cache/add_local (2.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-686010 /tmp/TestFunctionalserialCacheCmdcacheadd_local788264053/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 cache add minikube-local-cache-test:functional-686010
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-686010 cache add minikube-local-cache-test:functional-686010: (1.696001483s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 cache delete minikube-local-cache-test:functional-686010
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-686010
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.01s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686010 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (277.153984ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 kubectl -- --context functional-686010 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-686010 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (45.84s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686010 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1009 18:05:02.707544  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:02.714064  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:02.725546  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:02.746954  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:02.788465  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:02.869931  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:03.031474  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:03.353154  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:03.995228  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:05.276858  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:07.839256  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:12.960678  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:05:23.202549  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-686010 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.843858509s)
functional_test.go:776: restart took 45.843995044s for "functional-686010" cluster.
I1009 18:05:27.579949  144094 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (45.84s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-686010 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.22s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-686010 logs: (1.215008455s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

TestFunctional/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 logs --file /tmp/TestFunctionalserialLogsFileCmd2017276526/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-686010 logs --file /tmp/TestFunctionalserialLogsFileCmd2017276526/001/logs.txt: (1.249428171s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.16s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-686010 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-686010
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-686010: exit status 115 (372.962294ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31040 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-686010 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686010 config get cpus: exit status 14 (69.360069ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686010 config get cpus: exit status 14 (57.949078ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

TestFunctional/parallel/DashboardCmd (15.1s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-686010 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-686010 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 191558: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.10s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686010 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-686010 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (202.97084ms)

-- stdout --
	* [functional-686010] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1009 18:05:54.344005  191082 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:05:54.344366  191082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:05:54.344381  191082 out.go:374] Setting ErrFile to fd 2...
	I1009 18:05:54.344388  191082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:05:54.344690  191082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	I1009 18:05:54.345343  191082 out.go:368] Setting JSON to false
	I1009 18:05:54.346765  191082 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2894,"bootTime":1760030260,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:05:54.346929  191082 start.go:141] virtualization: kvm guest
	I1009 18:05:54.349852  191082 out.go:179] * [functional-686010] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:05:54.352005  191082 notify.go:220] Checking for updates...
	I1009 18:05:54.352048  191082 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:05:54.353355  191082 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:05:54.354846  191082 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	I1009 18:05:54.356357  191082 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	I1009 18:05:54.357628  191082 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:05:54.358816  191082 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:05:54.360433  191082 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1009 18:05:54.361170  191082 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:05:54.393472  191082 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:05:54.393586  191082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:05:54.468525  191082 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-09 18:05:54.453604375 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:05:54.468726  191082 docker.go:318] overlay module found
	I1009 18:05:54.471533  191082 out.go:179] * Using the docker driver based on existing profile
	I1009 18:05:54.472828  191082 start.go:305] selected driver: docker
	I1009 18:05:54.472850  191082 start.go:925] validating driver "docker" against &{Name:functional-686010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-686010 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:05:54.472968  191082 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:05:54.474879  191082 out.go:203] 
	W1009 18:05:54.475994  191082 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:05:54.477373  191082 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686010 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-686010 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-686010 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (205.100559ms)

-- stdout --
	* [functional-686010] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1009 18:05:54.825528  191295 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:05:54.825690  191295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:05:54.825703  191295 out.go:374] Setting ErrFile to fd 2...
	I1009 18:05:54.825710  191295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:05:54.826146  191295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	I1009 18:05:54.826818  191295 out.go:368] Setting JSON to false
	I1009 18:05:54.828060  191295 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2895,"bootTime":1760030260,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:05:54.828184  191295 start.go:141] virtualization: kvm guest
	I1009 18:05:54.830723  191295 out.go:179] * [functional-686010] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1009 18:05:54.831987  191295 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:05:54.831996  191295 notify.go:220] Checking for updates...
	I1009 18:05:54.834675  191295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:05:54.836792  191295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	I1009 18:05:54.838101  191295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	I1009 18:05:54.840507  191295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:05:54.842570  191295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:05:54.844663  191295 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1009 18:05:54.845427  191295 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:05:54.875035  191295 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:05:54.875149  191295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:05:54.952857  191295 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-09 18:05:54.941036341 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:05:54.953013  191295 docker.go:318] overlay module found
	I1009 18:05:54.955566  191295 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1009 18:05:54.956758  191295 start.go:305] selected driver: docker
	I1009 18:05:54.956776  191295 start.go:925] validating driver "docker" against &{Name:functional-686010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-686010 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:05:54.956905  191295 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:05:54.959043  191295 out.go:203] 
	W1009 18:05:54.960398  191295 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 18:05:54.961614  191295 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

TestFunctional/parallel/ServiceCmdConnect (8.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-686010 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-686010 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2ttzn" [958e0113-d12d-432e-a76f-0e2e35c83678] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-2ttzn" [958e0113-d12d-432e-a76f-0e2e35c83678] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003887783s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32715
functional_test.go:1680: http://192.168.49.2:32715: success! body:
Request served by hello-node-connect-7d85dfc575-2ttzn

HTTP/1.1 GET /

Host: 192.168.49.2:32715
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.54s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (39.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f58b1000-4a20-497e-b380-7309b79d7893] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004102193s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-686010 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-686010 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-686010 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-686010 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7a42777e-c775-4e61-b60e-ddc79b8d9df5] Pending
helpers_test.go:352: "sp-pod" [7a42777e-c775-4e61-b60e-ddc79b8d9df5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7a42777e-c775-4e61-b60e-ddc79b8d9df5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003073452s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-686010 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-686010 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-686010 delete -f testdata/storage-provisioner/pod.yaml: (2.00773067s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-686010 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7dab6db3-2dd9-464a-adc3-efb279907aa7] Pending
helpers_test.go:352: "sp-pod" [7dab6db3-2dd9-464a-adc3-efb279907aa7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7dab6db3-2dd9-464a-adc3-efb279907aa7] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.003571331s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-686010 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.82s)

TestFunctional/parallel/SSHCmd (0.57s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (1.82s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh -n functional-686010 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 cp functional-686010:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd861535399/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh -n functional-686010 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh -n functional-686010 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.82s)

TestFunctional/parallel/MySQL (27.94s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-686010 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-266jf" [4c97dadb-9c63-4508-bcac-5827bf771842] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-266jf" [4c97dadb-9c63-4508-bcac-5827bf771842] Running
I1009 18:06:01.488946  144094 detect.go:223] nested VM detected
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.004184816s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-686010 exec mysql-5bb876957f-266jf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-686010 exec mysql-5bb876957f-266jf -- mysql -ppassword -e "show databases;": exit status 1 (196.271831ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1009 18:06:07.647159  144094 retry.go:31] will retry after 965.76054ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-686010 exec mysql-5bb876957f-266jf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-686010 exec mysql-5bb876957f-266jf -- mysql -ppassword -e "show databases;": exit status 1 (115.1721ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1009 18:06:08.728934  144094 retry.go:31] will retry after 1.172558008s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-686010 exec mysql-5bb876957f-266jf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-686010 exec mysql-5bb876957f-266jf -- mysql -ppassword -e "show databases;": exit status 1 (120.987822ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1009 18:06:10.023378  144094 retry.go:31] will retry after 3.043180831s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-686010 exec mysql-5bb876957f-266jf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.94s)

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/144094/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo cat /etc/test/nested/copy/144094/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.77s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/144094.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo cat /etc/ssl/certs/144094.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/144094.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo cat /usr/share/ca-certificates/144094.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1440942.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo cat /etc/ssl/certs/1440942.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1440942.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo cat /usr/share/ca-certificates/1440942.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.77s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-686010 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686010 ssh "sudo systemctl is-active docker": exit status 1 (319.119447ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686010 ssh "sudo systemctl is-active crio": exit status 1 (303.651913ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

TestFunctional/parallel/License (0.49s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.49s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 version -o=json --components
2025/10/09 18:06:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686010 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-686010
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-686010
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686010 image ls --format short --alsologtostderr:
I1009 18:06:10.476312  194590 out.go:360] Setting OutFile to fd 1 ...
I1009 18:06:10.476638  194590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:06:10.476649  194590 out.go:374] Setting ErrFile to fd 2...
I1009 18:06:10.476653  194590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:06:10.476915  194590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
I1009 18:06:10.477795  194590 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1009 18:06:10.477968  194590 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1009 18:06:10.478569  194590 cli_runner.go:164] Run: docker container inspect functional-686010 --format={{.State.Status}}
I1009 18:06:10.501523  194590 ssh_runner.go:195] Run: systemctl --version
I1009 18:06:10.501595  194590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686010
I1009 18:06:10.523939  194590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/functional-686010/id_rsa Username:docker}
I1009 18:06:10.636826  194590 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686010 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server               │ functional-686010  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-686010  │ sha256:6a20e3 │ 991B   │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/library/nginx                     │ latest             │ sha256:07ccdb │ 62.7MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:5e7abc │ 22.6MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686010 image ls --format table --alsologtostderr:
I1009 18:06:10.985597  194921 out.go:360] Setting OutFile to fd 1 ...
I1009 18:06:10.985884  194921 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:06:10.985895  194921 out.go:374] Setting ErrFile to fd 2...
I1009 18:06:10.985899  194921 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:06:10.986115  194921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
I1009 18:06:10.986957  194921 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1009 18:06:10.987077  194921 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1009 18:06:10.987636  194921 cli_runner.go:164] Run: docker container inspect functional-686010 --format={{.State.Status}}
I1009 18:06:11.008948  194921 ssh_runner.go:195] Run: systemctl --version
I1009 18:06:11.009037  194921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686010
I1009 18:06:11.027851  194921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/functional-686010/id_rsa Username:docker}
I1009 18:06:11.135921  194921 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686010 image ls --format json --alsologtostderr:
[{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:6a20e3fc0b5bf5a35f1283613311747aaac316c040322fe409c3e4f2e4f3e19d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-686010"],"size":"991"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-686010"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"62706233"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22596807"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686010 image ls --format json --alsologtostderr:
I1009 18:06:10.744630  194773 out.go:360] Setting OutFile to fd 1 ...
I1009 18:06:10.744918  194773 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:06:10.744930  194773 out.go:374] Setting ErrFile to fd 2...
I1009 18:06:10.744935  194773 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:06:10.745201  194773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
I1009 18:06:10.745905  194773 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1009 18:06:10.746012  194773 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1009 18:06:10.746510  194773 cli_runner.go:164] Run: docker container inspect functional-686010 --format={{.State.Status}}
I1009 18:06:10.769101  194773 ssh_runner.go:195] Run: systemctl --version
I1009 18:06:10.769214  194773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686010
I1009 18:06:10.793547  194773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/functional-686010/id_rsa Username:docker}
I1009 18:06:10.899320  194773 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-686010 image ls --format yaml --alsologtostderr:
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "62706233"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:6a20e3fc0b5bf5a35f1283613311747aaac316c040322fe409c3e4f2e4f3e19d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-686010
size: "991"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e
repoTags:
- docker.io/library/nginx:alpine
size: "22596807"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-686010
size: "2372971"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686010 image ls --format yaml --alsologtostderr:
I1009 18:06:10.499754  194599 out.go:360] Setting OutFile to fd 1 ...
I1009 18:06:10.500023  194599 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:06:10.500035  194599 out.go:374] Setting ErrFile to fd 2...
I1009 18:06:10.500039  194599 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:06:10.500325  194599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
I1009 18:06:10.500958  194599 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1009 18:06:10.501049  194599 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1009 18:06:10.501611  194599 cli_runner.go:164] Run: docker container inspect functional-686010 --format={{.State.Status}}
I1009 18:06:10.523526  194599 ssh_runner.go:195] Run: systemctl --version
I1009 18:06:10.523623  194599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686010
I1009 18:06:10.546652  194599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/functional-686010/id_rsa Username:docker}
I1009 18:06:10.653548  194599 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686010 ssh pgrep buildkitd: exit status 1 (289.565944ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image build -t localhost/my-image:functional-686010 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-686010 image build -t localhost/my-image:functional-686010 testdata/build --alsologtostderr: (3.875456433s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-686010 image build -t localhost/my-image:functional-686010 testdata/build --alsologtostderr:
I1009 18:06:11.014756  194933 out.go:360] Setting OutFile to fd 1 ...
I1009 18:06:11.015068  194933 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:06:11.015079  194933 out.go:374] Setting ErrFile to fd 2...
I1009 18:06:11.015086  194933 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:06:11.015316  194933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
I1009 18:06:11.015980  194933 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1009 18:06:11.016913  194933 config.go:182] Loaded profile config "functional-686010": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1009 18:06:11.017387  194933 cli_runner.go:164] Run: docker container inspect functional-686010 --format={{.State.Status}}
I1009 18:06:11.037385  194933 ssh_runner.go:195] Run: systemctl --version
I1009 18:06:11.037458  194933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686010
I1009 18:06:11.057897  194933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/functional-686010/id_rsa Username:docker}
I1009 18:06:11.164220  194933 build_images.go:161] Building image from path: /tmp/build.231217134.tar
I1009 18:06:11.164313  194933 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 18:06:11.173802  194933 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.231217134.tar
I1009 18:06:11.178572  194933 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.231217134.tar: stat -c "%s %y" /var/lib/minikube/build/build.231217134.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.231217134.tar': No such file or directory
I1009 18:06:11.178601  194933 ssh_runner.go:362] scp /tmp/build.231217134.tar --> /var/lib/minikube/build/build.231217134.tar (3072 bytes)
I1009 18:06:11.198893  194933 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.231217134
I1009 18:06:11.207589  194933 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.231217134 -xf /var/lib/minikube/build/build.231217134.tar
I1009 18:06:11.218376  194933 containerd.go:394] Building image: /var/lib/minikube/build/build.231217134
I1009 18:06:11.218469  194933 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.231217134 --local dockerfile=/var/lib/minikube/build/build.231217134 --output type=image,name=localhost/my-image:functional-686010
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d8bc0111cb28c28268b2bb56d4f774915c2ffbc2f0c8d48c37e55642625e2e9d done
#8 exporting config sha256:f7987aeb214311d116ae93e719ea29c69853de98aaff46228048284ec35ed2dd
#8 exporting config sha256:f7987aeb214311d116ae93e719ea29c69853de98aaff46228048284ec35ed2dd done
#8 naming to localhost/my-image:functional-686010 done
#8 DONE 0.1s
I1009 18:06:14.813151  194933 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.231217134 --local dockerfile=/var/lib/minikube/build/build.231217134 --output type=image,name=localhost/my-image:functional-686010: (3.594620853s)
I1009 18:06:14.813234  194933 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.231217134
I1009 18:06:14.822512  194933 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.231217134.tar
I1009 18:06:14.831219  194933 build_images.go:217] Built localhost/my-image:functional-686010 from /tmp/build.231217134.tar
I1009 18:06:14.831254  194933 build_images.go:133] succeeded building to: functional-686010
I1009 18:06:14.831260  194933 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.39s)

TestFunctional/parallel/ImageCommands/Setup (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.925019496s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-686010
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-686010 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-686010 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-ht6pn" [d64e0b02-3877-4ecc-9121-c1ea76c29b64] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-ht6pn" [d64e0b02-3877-4ecc-9121-c1ea76c29b64] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003778737s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image load --daemon kicbase/echo-server:functional-686010 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image load --daemon kicbase/echo-server:functional-686010 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-686010
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image load --daemon kicbase/echo-server:functional-686010 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image save kicbase/echo-server:functional-686010 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image rm kicbase/echo-server:functional-686010 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.56s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-686010
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 image save --daemon kicbase/echo-server:functional-686010 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-686010
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "328.347995ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.502852ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "333.51157ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "56.427372ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-686010 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-686010 tunnel --alsologtostderr]
E1009 18:05:43.684085  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-686010 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 189245: os: process already finished
helpers_test.go:519: unable to terminate pid 188963: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-686010 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 service list -o json
functional_test.go:1504: Took "379.597761ms" to run "out/minikube-linux-amd64 -p functional-686010 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-686010 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-686010 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d5367abe-d999-42fa-abad-399605de0cc4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d5367abe-d999-42fa-abad-399605de0cc4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00390676s
I1009 18:05:54.108937  144094 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.23s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 service --namespace=default --https --url hello-node
I1009 18:05:44.245360  144094 detect.go:223] nested VM detected
functional_test.go:1532: found endpoint: https://192.168.49.2:32012
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32012
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/parallel/MountCmd/any-port (20.81s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdany-port3919884974/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760033145201353686" to /tmp/TestFunctionalparallelMountCmdany-port3919884974/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760033145201353686" to /tmp/TestFunctionalparallelMountCmdany-port3919884974/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760033145201353686" to /tmp/TestFunctionalparallelMountCmdany-port3919884974/001/test-1760033145201353686
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (306.492576ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1009 18:05:45.508188  144094 retry.go:31] will retry after 467.697995ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 18:05 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 18:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 18:05 test-1760033145201353686
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh cat /mount-9p/test-1760033145201353686
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-686010 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [324c9e8a-b82f-4ddb-9b94-2e1a614ff4e4] Pending
helpers_test.go:352: "busybox-mount" [324c9e8a-b82f-4ddb-9b94-2e1a614ff4e4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [324c9e8a-b82f-4ddb-9b94-2e1a614ff4e4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [324c9e8a-b82f-4ddb-9b94-2e1a614ff4e4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.003517698s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-686010 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdany-port3919884974/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (20.81s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-686010 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.133.128 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-686010 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/specific-port (1.81s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdspecific-port2631094719/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.063368ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1009 18:06:06.373924  144094 retry.go:31] will retry after 255.67435ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdspecific-port2631094719/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686010 ssh "sudo umount -f /mount-9p": exit status 1 (339.030386ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-686010 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdspecific-port2631094719/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1987669749/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1987669749/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1987669749/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T" /mount1: exit status 1 (397.674781ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1009 18:06:08.227168  144094 retry.go:31] will retry after 441.698678ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-686010 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-686010 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1987669749/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1987669749/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-686010 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1987669749/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-686010
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-686010
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-686010
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (159.76s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1009 18:06:24.647026  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:07:46.569421  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m39.031647508s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (159.76s)

TestMultiControlPlane/serial/DeployApp (6.12s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 kubectl -- rollout status deployment/busybox: (3.975496034s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-k55jg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-nnd5k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-x9grl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-k55jg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-nnd5k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-x9grl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-k55jg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-nnd5k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-x9grl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.12s)

TestMultiControlPlane/serial/PingHostFromPods (1.07s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-k55jg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-k55jg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-nnd5k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-nnd5k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-x9grl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 kubectl -- exec busybox-7b57f96db7-x9grl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.07s)

TestMultiControlPlane/serial/AddWorkerNode (23.61s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 node add --alsologtostderr -v 5: (22.724063178s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.61s)

TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-264518 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (17.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp testdata/cp-test.txt ha-264518:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1748172181/001/cp-test_ha-264518.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518:/home/docker/cp-test.txt ha-264518-m02:/home/docker/cp-test_ha-264518_ha-264518-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m02 "sudo cat /home/docker/cp-test_ha-264518_ha-264518-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518:/home/docker/cp-test.txt ha-264518-m03:/home/docker/cp-test_ha-264518_ha-264518-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m03 "sudo cat /home/docker/cp-test_ha-264518_ha-264518-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518:/home/docker/cp-test.txt ha-264518-m04:/home/docker/cp-test_ha-264518_ha-264518-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m04 "sudo cat /home/docker/cp-test_ha-264518_ha-264518-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp testdata/cp-test.txt ha-264518-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1748172181/001/cp-test_ha-264518-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m02:/home/docker/cp-test.txt ha-264518:/home/docker/cp-test_ha-264518-m02_ha-264518.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518 "sudo cat /home/docker/cp-test_ha-264518-m02_ha-264518.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m02:/home/docker/cp-test.txt ha-264518-m03:/home/docker/cp-test_ha-264518-m02_ha-264518-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m03 "sudo cat /home/docker/cp-test_ha-264518-m02_ha-264518-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m02:/home/docker/cp-test.txt ha-264518-m04:/home/docker/cp-test_ha-264518-m02_ha-264518-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m04 "sudo cat /home/docker/cp-test_ha-264518-m02_ha-264518-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp testdata/cp-test.txt ha-264518-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1748172181/001/cp-test_ha-264518-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m03:/home/docker/cp-test.txt ha-264518:/home/docker/cp-test_ha-264518-m03_ha-264518.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518 "sudo cat /home/docker/cp-test_ha-264518-m03_ha-264518.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m03:/home/docker/cp-test.txt ha-264518-m02:/home/docker/cp-test_ha-264518-m03_ha-264518-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m02 "sudo cat /home/docker/cp-test_ha-264518-m03_ha-264518-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m03:/home/docker/cp-test.txt ha-264518-m04:/home/docker/cp-test_ha-264518-m03_ha-264518-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m04 "sudo cat /home/docker/cp-test_ha-264518-m03_ha-264518-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp testdata/cp-test.txt ha-264518-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1748172181/001/cp-test_ha-264518-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m04:/home/docker/cp-test.txt ha-264518:/home/docker/cp-test_ha-264518-m04_ha-264518.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518 "sudo cat /home/docker/cp-test_ha-264518-m04_ha-264518.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m04:/home/docker/cp-test.txt ha-264518-m02:/home/docker/cp-test_ha-264518-m04_ha-264518-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m02 "sudo cat /home/docker/cp-test_ha-264518-m04_ha-264518-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 cp ha-264518-m04:/home/docker/cp-test.txt ha-264518-m03:/home/docker/cp-test_ha-264518-m04_ha-264518-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 ssh -n ha-264518-m03 "sudo cat /home/docker/cp-test_ha-264518-m04_ha-264518-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.06s)

TestMultiControlPlane/serial/StopSecondaryNode (12.63s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 node stop m02 --alsologtostderr -v 5: (11.922520294s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-264518 status --alsologtostderr -v 5: exit status 7 (708.398252ms)
-- stdout --
	ha-264518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-264518-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-264518-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1009 18:10:01.719831  216340 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:10:01.720153  216340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:10:01.720163  216340 out.go:374] Setting ErrFile to fd 2...
	I1009 18:10:01.720169  216340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:10:01.720374  216340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	I1009 18:10:01.720619  216340 out.go:368] Setting JSON to false
	I1009 18:10:01.720659  216340 mustload.go:65] Loading cluster: ha-264518
	I1009 18:10:01.720760  216340 notify.go:220] Checking for updates...
	I1009 18:10:01.721028  216340 config.go:182] Loaded profile config "ha-264518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1009 18:10:01.721048  216340 status.go:174] checking status of ha-264518 ...
	I1009 18:10:01.721528  216340 cli_runner.go:164] Run: docker container inspect ha-264518 --format={{.State.Status}}
	I1009 18:10:01.742929  216340 status.go:371] ha-264518 host status = "Running" (err=<nil>)
	I1009 18:10:01.742956  216340 host.go:66] Checking if "ha-264518" exists ...
	I1009 18:10:01.743251  216340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-264518
	I1009 18:10:01.762667  216340 host.go:66] Checking if "ha-264518" exists ...
	I1009 18:10:01.763003  216340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:10:01.763064  216340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-264518
	I1009 18:10:01.780964  216340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/ha-264518/id_rsa Username:docker}
	I1009 18:10:01.882993  216340 ssh_runner.go:195] Run: systemctl --version
	I1009 18:10:01.889707  216340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:10:01.902734  216340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:10:01.961864  216340 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-09 18:10:01.950800964 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:10:01.962480  216340 kubeconfig.go:125] found "ha-264518" server: "https://192.168.49.254:8443"
	I1009 18:10:01.962517  216340 api_server.go:166] Checking apiserver status ...
	I1009 18:10:01.962562  216340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:10:01.975034  216340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1391/cgroup
	W1009 18:10:01.983402  216340 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1391/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:10:01.983449  216340 ssh_runner.go:195] Run: ls
	I1009 18:10:01.986952  216340 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 18:10:01.991865  216340 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 18:10:01.991890  216340 status.go:463] ha-264518 apiserver status = Running (err=<nil>)
	I1009 18:10:01.991903  216340 status.go:176] ha-264518 status: &{Name:ha-264518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:10:01.991922  216340 status.go:174] checking status of ha-264518-m02 ...
	I1009 18:10:01.992240  216340 cli_runner.go:164] Run: docker container inspect ha-264518-m02 --format={{.State.Status}}
	I1009 18:10:02.011011  216340 status.go:371] ha-264518-m02 host status = "Stopped" (err=<nil>)
	I1009 18:10:02.011033  216340 status.go:384] host is not running, skipping remaining checks
	I1009 18:10:02.011041  216340 status.go:176] ha-264518-m02 status: &{Name:ha-264518-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:10:02.011064  216340 status.go:174] checking status of ha-264518-m03 ...
	I1009 18:10:02.011347  216340 cli_runner.go:164] Run: docker container inspect ha-264518-m03 --format={{.State.Status}}
	I1009 18:10:02.028935  216340 status.go:371] ha-264518-m03 host status = "Running" (err=<nil>)
	I1009 18:10:02.028969  216340 host.go:66] Checking if "ha-264518-m03" exists ...
	I1009 18:10:02.029296  216340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-264518-m03
	I1009 18:10:02.046364  216340 host.go:66] Checking if "ha-264518-m03" exists ...
	I1009 18:10:02.046649  216340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:10:02.046696  216340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-264518-m03
	I1009 18:10:02.064789  216340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/ha-264518-m03/id_rsa Username:docker}
	I1009 18:10:02.165531  216340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:10:02.178456  216340 kubeconfig.go:125] found "ha-264518" server: "https://192.168.49.254:8443"
	I1009 18:10:02.178485  216340 api_server.go:166] Checking apiserver status ...
	I1009 18:10:02.178517  216340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:10:02.190196  216340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1315/cgroup
	W1009 18:10:02.198443  216340 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1315/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:10:02.198492  216340 ssh_runner.go:195] Run: ls
	I1009 18:10:02.202212  216340 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 18:10:02.207928  216340 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 18:10:02.207954  216340 status.go:463] ha-264518-m03 apiserver status = Running (err=<nil>)
	I1009 18:10:02.207966  216340 status.go:176] ha-264518-m03 status: &{Name:ha-264518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:10:02.207990  216340 status.go:174] checking status of ha-264518-m04 ...
	I1009 18:10:02.208326  216340 cli_runner.go:164] Run: docker container inspect ha-264518-m04 --format={{.State.Status}}
	I1009 18:10:02.228483  216340 status.go:371] ha-264518-m04 host status = "Running" (err=<nil>)
	I1009 18:10:02.228519  216340 host.go:66] Checking if "ha-264518-m04" exists ...
	I1009 18:10:02.228788  216340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-264518-m04
	I1009 18:10:02.247176  216340 host.go:66] Checking if "ha-264518-m04" exists ...
	I1009 18:10:02.247482  216340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:10:02.247538  216340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-264518-m04
	I1009 18:10:02.264278  216340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/ha-264518-m04/id_rsa Username:docker}
	I1009 18:10:02.363509  216340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:10:02.375916  216340 status.go:176] ha-264518-m04 status: &{Name:ha-264518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.63s)
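The `status` output captured above is plain text grouped per node: a bare node name starts a block, `key: value` lines fill it, and a blank line ends it. A minimal parsing sketch of that layout (the field names come from the log itself, but the parser is an illustrative assumption, not minikube's own code):

```python
def parse_minikube_status(text: str) -> dict:
    """Parse `minikube status` plain-text output into {node: {field: value}}."""
    nodes = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            current = None          # blank line ends the current node block
        elif ":" in line and current is not None:
            key, _, value = line.partition(":")
            nodes[current][key.strip()] = value.strip()
        else:
            current = line          # a bare name starts a new node block
            nodes[current] = {}
    return nodes
```

Fed the stdout above, such a parser would report `host: Stopped` for `ha-264518-m02` and `host: Running` for the other nodes, matching the exit status 7 the test expects.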

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1009 18:10:02.707099  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.11s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 node start m02 --alsologtostderr -v 5: (8.164224801s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.11s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (94.49s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 stop --alsologtostderr -v 5
E1009 18:10:30.411456  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:35.383325  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:35.389864  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:35.401323  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:35.422780  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:35.466337  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:35.547842  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:35.709940  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:36.031800  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:36.673417  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:37.956280  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:40.517861  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:45.639597  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 stop --alsologtostderr -v 5: (36.929942814s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 start --wait true --alsologtostderr -v 5
E1009 18:10:55.881908  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:11:16.364542  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 start --wait true --alsologtostderr -v 5: (57.446845841s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (94.49s)
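The test above records `node list` before `stop` and again after `start --wait true`, and passes only if no node was dropped. A small comparison sketch of that check (the tab-separated `name<TAB>ip` line format is an assumption for illustration):

```python
def node_names(node_list_output: str) -> list:
    """Extract node names from `minikube node list`-style output,
    assuming one 'name<TAB>ip' entry per line (illustrative format)."""
    return [
        line.split("\t")[0]
        for line in node_list_output.splitlines()
        if line.strip()
    ]

def keeps_nodes(before: str, after: str) -> bool:
    """True if the restart preserved the exact node set and order."""
    return node_names(before) == node_names(after)
```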

TestMultiControlPlane/serial/DeleteSecondaryNode (9.13s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 node delete m03 --alsologtostderr -v 5: (8.328383403s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.13s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1009 18:11:57.326348  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (35.85s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 stop --alsologtostderr -v 5: (35.744148721s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-264518 status --alsologtostderr -v 5: exit status 7 (103.811514ms)
-- stdout --
	ha-264518
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264518-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264518-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1009 18:12:33.212409  232803 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:12:33.212637  232803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:12:33.212645  232803 out.go:374] Setting ErrFile to fd 2...
	I1009 18:12:33.212649  232803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:12:33.213043  232803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	I1009 18:12:33.213251  232803 out.go:368] Setting JSON to false
	I1009 18:12:33.213281  232803 mustload.go:65] Loading cluster: ha-264518
	I1009 18:12:33.213390  232803 notify.go:220] Checking for updates...
	I1009 18:12:33.213680  232803 config.go:182] Loaded profile config "ha-264518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1009 18:12:33.213699  232803 status.go:174] checking status of ha-264518 ...
	I1009 18:12:33.214066  232803 cli_runner.go:164] Run: docker container inspect ha-264518 --format={{.State.Status}}
	I1009 18:12:33.232885  232803 status.go:371] ha-264518 host status = "Stopped" (err=<nil>)
	I1009 18:12:33.232912  232803 status.go:384] host is not running, skipping remaining checks
	I1009 18:12:33.232922  232803 status.go:176] ha-264518 status: &{Name:ha-264518 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:12:33.232954  232803 status.go:174] checking status of ha-264518-m02 ...
	I1009 18:12:33.233334  232803 cli_runner.go:164] Run: docker container inspect ha-264518-m02 --format={{.State.Status}}
	I1009 18:12:33.249936  232803 status.go:371] ha-264518-m02 host status = "Stopped" (err=<nil>)
	I1009 18:12:33.249983  232803 status.go:384] host is not running, skipping remaining checks
	I1009 18:12:33.249993  232803 status.go:176] ha-264518-m02 status: &{Name:ha-264518-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:12:33.250028  232803 status.go:174] checking status of ha-264518-m04 ...
	I1009 18:12:33.250318  232803 cli_runner.go:164] Run: docker container inspect ha-264518-m04 --format={{.State.Status}}
	I1009 18:12:33.266457  232803 status.go:371] ha-264518-m04 host status = "Stopped" (err=<nil>)
	I1009 18:12:33.266477  232803 status.go:384] host is not running, skipping remaining checks
	I1009 18:12:33.266484  232803 status.go:176] ha-264518-m04 status: &{Name:ha-264518-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.85s)

TestMultiControlPlane/serial/RestartCluster (53.37s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1009 18:13:19.248413  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (52.561416032s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.37s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (44.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-264518 node add --control-plane --alsologtostderr -v 5: (43.188169693s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-264518 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestJSONOutput/start/Command (38.64s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-774393 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-774393 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (38.640920054s)
--- PASS: TestJSONOutput/start/Command (38.64s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
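The DistinctCurrentSteps and IncreasingCurrentSteps subtests above validate the step counter in minikube's `--output=json` event stream (one JSON object per line). A sketch of an equivalent check, with the `data.currentstep` field name assumed for illustration:

```python
import json

def current_steps(json_lines):
    """Collect the step counter from JSON-lines events; events without
    a currentstep field (plain log messages) are skipped."""
    steps = []
    for line in json_lines:
        data = json.loads(line).get("data", {})
        if "currentstep" in data:
            steps.append(int(data["currentstep"]))
    return steps

def steps_distinct_and_increasing(json_lines) -> bool:
    """Mirror of the two subtest conditions: no duplicate step numbers,
    and step numbers never decrease across the stream."""
    steps = current_steps(json_lines)
    return len(set(steps)) == len(steps) and steps == sorted(steps)
```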

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-774393 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-774393 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.72s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-774393 --output=json --user=testUser
E1009 18:15:02.708030  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-774393 --output=json --user=testUser: (5.721782882s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-878269 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-878269 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (66.120833ms)

-- stdout --
	{"specversion":"1.0","id":"5dc4c2bd-467d-4c3d-b03b-0484ef50d444","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-878269] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b132be8-da46-40c4-a8a8-90b0791ebd82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"1e4fcc16-6850-4e4b-8074-76b17ba92951","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ba4456bd-c0e6-4536-8366-4e88b620f71d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig"}}
	{"specversion":"1.0","id":"86b15d0f-edfb-4237-9fe7-32698a95cb65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube"}}
	{"specversion":"1.0","id":"e2f463d0-7f94-4bd7-80db-c81581ba4be3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fb6592ea-80b7-400a-9112-cd1c471d90ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5747d59d-09f1-4f76-9394-1cdc62953f46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-878269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-878269
--- PASS: TestErrorJSONOutput (0.21s)
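Each line of the `--output=json` stream above is a CloudEvents 1.0 envelope whose `data` payload is a flat string-to-string map. A minimal Go sketch for decoding one such line; the struct below is inferred from the fields visible in the logged output, not taken from minikube's own type definitions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the envelope seen in the --output=json lines above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// parseEvent decodes a single line of the JSON event stream.
func parseEvent(line string) (cloudEvent, error) {
	var ev cloudEvent
	err := json.Unmarshal([]byte(line), &ev)
	return ev, err
}

func main() {
	// The final event emitted by the failed `start` above.
	line := `{"specversion":"1.0","id":"5747d59d-09f1-4f76-9394-1cdc62953f46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	ev, err := parseEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: exit %s (%s)\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
}
```

An event of type `io.k8s.sigs.minikube.error` carries the exit code the test asserts on (56 here) in its `data` map.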

TestKicCustomNetwork/create_custom_network (34.15s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-751282 --network=
E1009 18:15:35.381671  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-751282 --network=: (32.028915157s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-751282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-751282
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-751282: (2.096890573s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.15s)

TestKicCustomNetwork/use_default_bridge_network (23.43s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-990456 --network=bridge
E1009 18:16:03.090189  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-990456 --network=bridge: (21.474595619s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-990456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-990456
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-990456: (1.935001185s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.43s)

TestKicExistingNetwork (23.4s)

=== RUN   TestKicExistingNetwork
I1009 18:16:08.844012  144094 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1009 18:16:08.860065  144094 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1009 18:16:08.860151  144094 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1009 18:16:08.860183  144094 cli_runner.go:164] Run: docker network inspect existing-network
W1009 18:16:08.875988  144094 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1009 18:16:08.876019  144094 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1009 18:16:08.876044  144094 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1009 18:16:08.876195  144094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 18:16:08.892264  144094 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a776d4a7d86a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:a7:10:79:cc:07} reservation:<nil>}
I1009 18:16:08.892646  144094 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022e2d70}
I1009 18:16:08.892673  144094 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1009 18:16:08.892728  144094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1009 18:16:08.950379  144094 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-398240 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-398240 --network=existing-network: (21.311493731s)
helpers_test.go:175: Cleaning up "existing-network-398240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-398240
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-398240: (1.948382203s)
I1009 18:16:32.227751  144094 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.40s)
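The subnet scan logged above (skip the taken 192.168.49.0/24, settle on 192.168.58.0/24) can be sketched as a walk over /24 candidates. The starting octet and the step of 9 are read off the logged subnets, not from minikube's network_create code:

```go
package main

import "fmt"

// firstFreeSubnet mimics the scan in the log: start at 192.168.49.0/24 and
// step the third octet by 9 (49, 58, 67, ...), returning the first candidate
// that is not already taken by an existing bridge network.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // no free candidate in the scanned range
}

func main() {
	// 192.168.49.0/24 is held by the existing minikube bridge (br-a776d4a7d86a above).
	taken := map[string]bool{"192.168.49.0/24": true}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.58.0/24, matching the log
}
```

The chosen CIDR is then handed to `docker network create --driver=bridge --subnet=... --gateway=...`, as the `cli_runner` line above shows.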

TestKicCustomSubnet (25.57s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-335270 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-335270 --subnet=192.168.60.0/24: (23.444935779s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-335270 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-335270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-335270
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-335270: (2.102705002s)
--- PASS: TestKicCustomSubnet (25.57s)

TestKicStaticIP (25.52s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-478542 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-478542 --static-ip=192.168.200.200: (23.210413883s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-478542 ip
helpers_test.go:175: Cleaning up "static-ip-478542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-478542
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-478542: (2.169198827s)
--- PASS: TestKicStaticIP (25.52s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (51.1s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-318089 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-318089 --driver=docker  --container-runtime=containerd: (21.223258312s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-321753 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-321753 --driver=docker  --container-runtime=containerd: (24.353003885s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-318089
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-321753
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-321753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-321753
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-321753: (1.95265853s)
helpers_test.go:175: Cleaning up "first-318089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-318089
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-318089: (2.338310133s)
--- PASS: TestMinikubeProfile (51.10s)

TestMountStart/serial/StartWithMountFirst (4.84s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-288365 --memory=3072 --mount-string /tmp/TestMountStartserial3219602776/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-288365 --memory=3072 --mount-string /tmp/TestMountStartserial3219602776/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.836011963s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.84s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-288365 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.37s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-301624 --memory=3072 --mount-string /tmp/TestMountStartserial3219602776/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-301624 --memory=3072 --mount-string /tmp/TestMountStartserial3219602776/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.367051734s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.37s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-301624 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-288365 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-288365 --alsologtostderr -v=5: (1.65937313s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-301624 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-301624
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-301624: (1.196255794s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.03s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-301624
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-301624: (7.026269021s)
--- PASS: TestMountStart/serial/RestartStopped (8.03s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-301624 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (64.62s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-043532 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-043532 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.14126918s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.62s)

TestMultiNode/serial/DeployApp2Nodes (5.32s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-043532 -- rollout status deployment/busybox: (3.938729011s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- exec busybox-7b57f96db7-54wbq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- exec busybox-7b57f96db7-kj6sb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- exec busybox-7b57f96db7-54wbq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- exec busybox-7b57f96db7-kj6sb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- exec busybox-7b57f96db7-54wbq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- exec busybox-7b57f96db7-kj6sb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.32s)

TestMultiNode/serial/PingHostFrom2Pods (0.74s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- exec busybox-7b57f96db7-54wbq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- exec busybox-7b57f96db7-54wbq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- exec busybox-7b57f96db7-kj6sb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-043532 -- exec busybox-7b57f96db7-kj6sb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)
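The pipeline the test execs in each pod, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, extracts the host IP as the third space-separated field of the fifth output line. A small Go sketch of the same extraction, fed a typical busybox-style nslookup reply (an illustrative sample, not output captured from this run):

```go
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup reproduces the shell pipeline above: take the fifth
// line of nslookup output and return its third space-separated field.
// Like cut -d' ', it splits on single spaces (empty fields count).
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // 192.168.67.1
}
```

The extracted address is then pinged from the pod (`ping -c 1 192.168.67.1` above), proving the pod can reach the host gateway.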

TestMultiNode/serial/AddNode (24.12s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-043532 -v=5 --alsologtostderr
E1009 18:20:02.707388  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-043532 -v=5 --alsologtostderr: (23.469251863s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.12s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-043532 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (9.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp testdata/cp-test.txt multinode-043532:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp multinode-043532:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile337696000/001/cp-test_multinode-043532.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp multinode-043532:/home/docker/cp-test.txt multinode-043532-m02:/home/docker/cp-test_multinode-043532_multinode-043532-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m02 "sudo cat /home/docker/cp-test_multinode-043532_multinode-043532-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp multinode-043532:/home/docker/cp-test.txt multinode-043532-m03:/home/docker/cp-test_multinode-043532_multinode-043532-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m03 "sudo cat /home/docker/cp-test_multinode-043532_multinode-043532-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp testdata/cp-test.txt multinode-043532-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp multinode-043532-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile337696000/001/cp-test_multinode-043532-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp multinode-043532-m02:/home/docker/cp-test.txt multinode-043532:/home/docker/cp-test_multinode-043532-m02_multinode-043532.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532 "sudo cat /home/docker/cp-test_multinode-043532-m02_multinode-043532.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp multinode-043532-m02:/home/docker/cp-test.txt multinode-043532-m03:/home/docker/cp-test_multinode-043532-m02_multinode-043532-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m03 "sudo cat /home/docker/cp-test_multinode-043532-m02_multinode-043532-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp testdata/cp-test.txt multinode-043532-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp multinode-043532-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile337696000/001/cp-test_multinode-043532-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp multinode-043532-m03:/home/docker/cp-test.txt multinode-043532:/home/docker/cp-test_multinode-043532-m03_multinode-043532.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532 "sudo cat /home/docker/cp-test_multinode-043532-m03_multinode-043532.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 cp multinode-043532-m03:/home/docker/cp-test.txt multinode-043532-m02:/home/docker/cp-test_multinode-043532-m03_multinode-043532-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 ssh -n multinode-043532-m02 "sudo cat /home/docker/cp-test_multinode-043532-m03_multinode-043532-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.93s)

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-043532 node stop m03: (1.234419878s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-043532 status: exit status 7 (500.606072ms)

-- stdout --
	multinode-043532
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-043532-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-043532-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-043532 status --alsologtostderr: exit status 7 (509.350982ms)

-- stdout --
	multinode-043532
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-043532-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-043532-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1009 18:20:25.712823  295637 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:20:25.713107  295637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:20:25.713131  295637 out.go:374] Setting ErrFile to fd 2...
	I1009 18:20:25.713135  295637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:20:25.713347  295637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	I1009 18:20:25.713511  295637 out.go:368] Setting JSON to false
	I1009 18:20:25.713542  295637 mustload.go:65] Loading cluster: multinode-043532
	I1009 18:20:25.713690  295637 notify.go:220] Checking for updates...
	I1009 18:20:25.714048  295637 config.go:182] Loaded profile config "multinode-043532": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1009 18:20:25.714074  295637 status.go:174] checking status of multinode-043532 ...
	I1009 18:20:25.714663  295637 cli_runner.go:164] Run: docker container inspect multinode-043532 --format={{.State.Status}}
	I1009 18:20:25.733127  295637 status.go:371] multinode-043532 host status = "Running" (err=<nil>)
	I1009 18:20:25.733156  295637 host.go:66] Checking if "multinode-043532" exists ...
	I1009 18:20:25.733492  295637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-043532
	I1009 18:20:25.752306  295637 host.go:66] Checking if "multinode-043532" exists ...
	I1009 18:20:25.752604  295637 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:20:25.752659  295637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-043532
	I1009 18:20:25.770663  295637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/multinode-043532/id_rsa Username:docker}
	I1009 18:20:25.871578  295637 ssh_runner.go:195] Run: systemctl --version
	I1009 18:20:25.878058  295637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:20:25.890686  295637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:20:25.951369  295637 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-09 18:20:25.940826135 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:20:25.951902  295637 kubeconfig.go:125] found "multinode-043532" server: "https://192.168.67.2:8443"
	I1009 18:20:25.951937  295637 api_server.go:166] Checking apiserver status ...
	I1009 18:20:25.951979  295637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:20:25.964214  295637 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1327/cgroup
	W1009 18:20:25.972703  295637 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1327/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:20:25.972753  295637 ssh_runner.go:195] Run: ls
	I1009 18:20:25.976597  295637 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1009 18:20:25.980706  295637 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1009 18:20:25.980729  295637 status.go:463] multinode-043532 apiserver status = Running (err=<nil>)
	I1009 18:20:25.980746  295637 status.go:176] multinode-043532 status: &{Name:multinode-043532 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:20:25.980764  295637 status.go:174] checking status of multinode-043532-m02 ...
	I1009 18:20:25.981001  295637 cli_runner.go:164] Run: docker container inspect multinode-043532-m02 --format={{.State.Status}}
	I1009 18:20:25.999836  295637 status.go:371] multinode-043532-m02 host status = "Running" (err=<nil>)
	I1009 18:20:25.999873  295637 host.go:66] Checking if "multinode-043532-m02" exists ...
	I1009 18:20:26.000329  295637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-043532-m02
	I1009 18:20:26.017667  295637 host.go:66] Checking if "multinode-043532-m02" exists ...
	I1009 18:20:26.017991  295637 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:20:26.018042  295637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-043532-m02
	I1009 18:20:26.035615  295637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21139-140450/.minikube/machines/multinode-043532-m02/id_rsa Username:docker}
	I1009 18:20:26.136578  295637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:20:26.150759  295637 status.go:176] multinode-043532-m02 status: &{Name:multinode-043532-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:20:26.150796  295637 status.go:174] checking status of multinode-043532-m03 ...
	I1009 18:20:26.151085  295637 cli_runner.go:164] Run: docker container inspect multinode-043532-m03 --format={{.State.Status}}
	I1009 18:20:26.170499  295637 status.go:371] multinode-043532-m03 host status = "Stopped" (err=<nil>)
	I1009 18:20:26.170527  295637 status.go:384] host is not running, skipping remaining checks
	I1009 18:20:26.170535  295637 status.go:176] multinode-043532-m03 status: &{Name:multinode-043532-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)

TestMultiNode/serial/StartAfterStop (7.15s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-043532 node start m03 -v=5 --alsologtostderr: (6.424897047s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.15s)

TestMultiNode/serial/RestartKeepsNodes (68.65s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-043532
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-043532
E1009 18:20:35.382207  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-043532: (24.86195984s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-043532 --wait=true -v=5 --alsologtostderr
E1009 18:21:25.773354  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-043532 --wait=true -v=5 --alsologtostderr: (43.677966408s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-043532
--- PASS: TestMultiNode/serial/RestartKeepsNodes (68.65s)

TestMultiNode/serial/DeleteNode (5.14s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-043532 node delete m03: (4.551265813s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.14s)

TestMultiNode/serial/StopMultiNode (23.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-043532 stop: (23.671685036s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-043532 status: exit status 7 (86.95428ms)

-- stdout --
	multinode-043532
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-043532-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-043532 status --alsologtostderr: exit status 7 (86.48604ms)

-- stdout --
	multinode-043532
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-043532-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1009 18:22:10.922062  305364 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:22:10.922188  305364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:22:10.922202  305364 out.go:374] Setting ErrFile to fd 2...
	I1009 18:22:10.922208  305364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:22:10.922402  305364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	I1009 18:22:10.922575  305364 out.go:368] Setting JSON to false
	I1009 18:22:10.922606  305364 mustload.go:65] Loading cluster: multinode-043532
	I1009 18:22:10.922722  305364 notify.go:220] Checking for updates...
	I1009 18:22:10.922991  305364 config.go:182] Loaded profile config "multinode-043532": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1009 18:22:10.923005  305364 status.go:174] checking status of multinode-043532 ...
	I1009 18:22:10.923462  305364 cli_runner.go:164] Run: docker container inspect multinode-043532 --format={{.State.Status}}
	I1009 18:22:10.943962  305364 status.go:371] multinode-043532 host status = "Stopped" (err=<nil>)
	I1009 18:22:10.943981  305364 status.go:384] host is not running, skipping remaining checks
	I1009 18:22:10.943987  305364 status.go:176] multinode-043532 status: &{Name:multinode-043532 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:22:10.944014  305364 status.go:174] checking status of multinode-043532-m02 ...
	I1009 18:22:10.944287  305364 cli_runner.go:164] Run: docker container inspect multinode-043532-m02 --format={{.State.Status}}
	I1009 18:22:10.961347  305364 status.go:371] multinode-043532-m02 host status = "Stopped" (err=<nil>)
	I1009 18:22:10.961389  305364 status.go:384] host is not running, skipping remaining checks
	I1009 18:22:10.961402  305364 status.go:176] multinode-043532-m02 status: &{Name:multinode-043532-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)

TestMultiNode/serial/RestartMultiNode (48.32s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-043532 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-043532 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.73071563s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-043532 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.32s)

TestMultiNode/serial/ValidateNameConflict (23.3s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-043532
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-043532-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-043532-m02 --driver=docker  --container-runtime=containerd: exit status 14 (64.661932ms)

-- stdout --
	* [multinode-043532-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-043532-m02' is duplicated with machine name 'multinode-043532-m02' in profile 'multinode-043532'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-043532-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-043532-m03 --driver=docker  --container-runtime=containerd: (20.94450118s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-043532
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-043532: exit status 80 (286.567757ms)

-- stdout --
	* Adding node m03 to cluster multinode-043532 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-043532-m03 already exists in multinode-043532-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-043532-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-043532-m03: (1.957481569s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.30s)

TestPreload (116.68s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-449765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-449765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (48.400366837s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-449765 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-449765 image pull gcr.io/k8s-minikube/busybox: (2.814821374s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-449765
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-449765: (5.62877668s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-449765 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1009 18:25:02.707102  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-449765 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (57.148031049s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-449765 image list
helpers_test.go:175: Cleaning up "test-preload-449765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-449765
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-449765: (2.459855842s)
--- PASS: TestPreload (116.68s)

TestScheduledStopUnix (99.99s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-576077 --memory=3072 --driver=docker  --container-runtime=containerd
E1009 18:25:35.381958  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-576077 --memory=3072 --driver=docker  --container-runtime=containerd: (24.261928601s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-576077 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-576077 -n scheduled-stop-576077
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-576077 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1009 18:25:48.024983  144094 retry.go:31] will retry after 110.373µs: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.026154  144094 retry.go:31] will retry after 83.064µs: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.027297  144094 retry.go:31] will retry after 284.019µs: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.028428  144094 retry.go:31] will retry after 496.663µs: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.029556  144094 retry.go:31] will retry after 457.68µs: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.030672  144094 retry.go:31] will retry after 764.499µs: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.031794  144094 retry.go:31] will retry after 574.486µs: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.032912  144094 retry.go:31] will retry after 1.873482ms: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.035114  144094 retry.go:31] will retry after 1.57586ms: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.037333  144094 retry.go:31] will retry after 2.172957ms: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.040639  144094 retry.go:31] will retry after 7.817552ms: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.049159  144094 retry.go:31] will retry after 9.732819ms: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.059334  144094 retry.go:31] will retry after 11.553962ms: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.071508  144094 retry.go:31] will retry after 13.548567ms: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
I1009 18:25:48.085733  144094 retry.go:31] will retry after 34.125761ms: open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/scheduled-stop-576077/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-576077 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-576077 -n scheduled-stop-576077
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-576077
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-576077 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1009 18:26:58.453807  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-576077
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-576077: exit status 7 (67.038277ms)

-- stdout --
	scheduled-stop-576077
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-576077 -n scheduled-stop-576077
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-576077 -n scheduled-stop-576077: exit status 7 (68.001367ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-576077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-576077
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-576077: (4.348187417s)
--- PASS: TestScheduledStopUnix (99.99s)

TestInsufficientStorage (9.66s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-207349 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-207349 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.226417356s)

-- stdout --
	{"specversion":"1.0","id":"258796eb-c29c-400f-a2fc-60b14c541fec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-207349] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3de8795-19d0-47ac-bfaa-1f8f99678b6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"b3c954fe-7e75-44ba-b71b-3dae8f12fac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7345ee56-e0b1-4a4d-8a97-b8a6de09ff4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig"}}
	{"specversion":"1.0","id":"1c69191d-ffed-49b0-a2f0-1b0de31f7aba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube"}}
	{"specversion":"1.0","id":"444c725d-3c3d-4e20-9cd4-2331550d8d5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8988aef9-9371-41b5-924c-239f023ea597","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f8416947-bea7-4e92-9534-ed82336ccfd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"64beb658-0a61-4ebf-b282-d61fdc605761","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ad9f5ef2-f072-48a6-9034-9c5ed73da6e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d722c54-a34f-4025-ac34-3e9105719c13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e37fd99e-64c0-4a13-9e88-352c63ada9cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-207349\" primary control-plane node in \"insufficient-storage-207349\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"91ede3a7-6c31-4889-8118-ec4d804d5282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d65759d9-d43d-4466-8c26-22ab8a18bb06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6df8e19f-db72-4e6e-a41e-fc0145a7c0b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-207349 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-207349 --output=json --layout=cluster: exit status 7 (284.023164ms)

-- stdout --
	{"Name":"insufficient-storage-207349","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-207349","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1009 18:27:10.834558  327187 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-207349" does not appear in /home/jenkins/minikube-integration/21139-140450/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-207349 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-207349 --output=json --layout=cluster: exit status 7 (279.890648ms)

-- stdout --
	{"Name":"insufficient-storage-207349","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-207349","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1009 18:27:11.115074  327297 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-207349" does not appear in /home/jenkins/minikube-integration/21139-140450/kubeconfig
	E1009 18:27:11.125184  327297 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/insufficient-storage-207349/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-207349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-207349
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-207349: (1.866070962s)
--- PASS: TestInsufficientStorage (9.66s)

TestRunningBinaryUpgrade (51.77s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2822982705 start -p running-upgrade-863692 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1009 18:30:02.707013  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2822982705 start -p running-upgrade-863692 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (23.297954977s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-863692 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-863692 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.23305182s)
helpers_test.go:175: Cleaning up "running-upgrade-863692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-863692
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-863692: (2.001933942s)
--- PASS: TestRunningBinaryUpgrade (51.77s)

TestKubernetesUpgrade (143.87s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-701596 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-701596 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.20245011s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-701596
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-701596: (1.880540201s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-701596 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-701596 status --format={{.Host}}: exit status 7 (67.053408ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-701596 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-701596 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m44.883607293s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-701596 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-701596 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-701596 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (84.64544ms)

-- stdout --
	* [kubernetes-upgrade-701596] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-701596
	    minikube start -p kubernetes-upgrade-701596 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7015962 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-701596 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-701596 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-701596 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.452107676s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-701596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-701596
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-701596: (2.215955983s)
--- PASS: TestKubernetesUpgrade (143.87s)

TestMissingContainerUpgrade (143.31s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2659020205 start -p missing-upgrade-552528 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2659020205 start -p missing-upgrade-552528 --memory=3072 --driver=docker  --container-runtime=containerd: (43.739930552s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-552528
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-552528: (1.678175274s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-552528
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-552528 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-552528 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m31.58459123s)
helpers_test.go:175: Cleaning up "missing-upgrade-552528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-552528
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-552528: (2.012497604s)
--- PASS: TestMissingContainerUpgrade (143.31s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:116: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-847951 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-847951 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (87.89767ms)

-- stdout --
	* [NoKubernetes-847951] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (33.48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-847951 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-847951 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.14815908s)
no_kubernetes_test.go:233: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-847951 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.48s)

TestNetworkPlugins/group/false (7.85s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-265552 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-265552 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (931.714625ms)

-- stdout --
	* [false-265552] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1009 18:27:17.642029  329664 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:27:17.642178  329664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:27:17.642190  329664 out.go:374] Setting ErrFile to fd 2...
	I1009 18:27:17.642196  329664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:27:17.642404  329664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-140450/.minikube/bin
	I1009 18:27:17.642942  329664 out.go:368] Setting JSON to false
	I1009 18:27:17.643959  329664 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4178,"bootTime":1760030260,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:27:17.644044  329664 start.go:141] virtualization: kvm guest
	I1009 18:27:17.780665  329664 out.go:179] * [false-265552] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:27:17.893703  329664 notify.go:220] Checking for updates...
	I1009 18:27:17.894249  329664 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:27:18.085163  329664 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:27:18.276928  329664 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-140450/kubeconfig
	I1009 18:27:18.293111  329664 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-140450/.minikube
	I1009 18:27:18.296524  329664 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:27:18.302852  329664 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:27:18.308899  329664 config.go:182] Loaded profile config "NoKubernetes-847951": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1009 18:27:18.309001  329664 config.go:182] Loaded profile config "force-systemd-env-855890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1009 18:27:18.309087  329664 config.go:182] Loaded profile config "offline-containerd-818450": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1009 18:27:18.309207  329664 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:27:18.331325  329664 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:27:18.331418  329664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:27:18.390559  329664 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:false NGoroutines:63 SystemTime:2025-10-09 18:27:18.380584906 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:27:18.390654  329664 docker.go:318] overlay module found
	I1009 18:27:18.451982  329664 out.go:179] * Using the docker driver based on user configuration
	I1009 18:27:18.486546  329664 start.go:305] selected driver: docker
	I1009 18:27:18.486572  329664 start.go:925] validating driver "docker" against <nil>
	I1009 18:27:18.486590  329664 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:27:18.512179  329664 out.go:203] 
	W1009 18:27:18.515527  329664 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1009 18:27:18.521594  329664 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-265552 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-265552

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-265552

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-265552

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-265552

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-265552

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-265552

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-265552

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-265552

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-265552

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-265552

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: /etc/hosts:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: /etc/resolv.conf:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-265552

>>> host: crictl pods:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: crictl containers:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"
>>> k8s: describe netcat deployment:
error: context "false-265552" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-265552" does not exist

>>> k8s: netcat logs:
error: context "false-265552" does not exist

>>> k8s: describe coredns deployment:
error: context "false-265552" does not exist

>>> k8s: describe coredns pods:
error: context "false-265552" does not exist

>>> k8s: coredns logs:
error: context "false-265552" does not exist

>>> k8s: describe api server pod(s):
error: context "false-265552" does not exist

>>> k8s: api server logs:
error: context "false-265552" does not exist
>>> host: /etc/cni:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: ip a s:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: ip r s:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: iptables-save:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: iptables table nat:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"
>>> k8s: describe kube-proxy daemon set:
error: context "false-265552" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-265552" does not exist

>>> k8s: kube-proxy logs:
error: context "false-265552" does not exist

>>> host: kubelet daemon status:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: kubelet daemon config:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> k8s: kubelet logs:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-265552

>>> host: docker daemon status:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: docker daemon config:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: /etc/docker/daemon.json:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: docker system info:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"
>>> host: cri-docker daemon status:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: cri-docker daemon config:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: cri-dockerd version:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: containerd daemon status:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: containerd daemon config:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: /etc/containerd/config.toml:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: containerd config dump:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: crio daemon status:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: crio daemon config:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: /etc/crio:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

>>> host: crio config:
* Profile "false-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-265552"

----------------------- debugLogs end: false-265552 [took: 6.726929014s] --------------------------------
helpers_test.go:175: Cleaning up "false-265552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-265552
--- PASS: TestNetworkPlugins/group/false (7.85s)

TestStoppedBinaryUpgrade/Setup (3.04s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.04s)

TestStoppedBinaryUpgrade/Upgrade (86.88s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1247853213 start -p stopped-upgrade-729726 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1247853213 start -p stopped-upgrade-729726 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m2.912927987s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1247853213 -p stopped-upgrade-729726 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1247853213 -p stopped-upgrade-729726 stop: (1.245944638s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-729726 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-729726 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.716376631s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (86.88s)

TestNoKubernetes/serial/StartWithStopK8s (25.37s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:145: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-847951 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:145: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-847951 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.668163362s)
no_kubernetes_test.go:233: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-847951 status -o json
no_kubernetes_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-847951 status -o json: exit status 2 (311.245774ms)

-- stdout --
	{"Name":"NoKubernetes-847951","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:157: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-847951
no_kubernetes_test.go:157: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-847951: (2.390952237s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.37s)

TestNoKubernetes/serial/Start (10.44s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-847951 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-847951 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (10.443647381s)
--- PASS: TestNoKubernetes/serial/Start (10.44s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-847951 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-847951 "sudo systemctl is-active --quiet service kubelet": exit status 1 (307.848389ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (6.48s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:212: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:212: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (5.592238196s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.48s)

TestNoKubernetes/serial/Stop (1.29s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-847951
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-847951: (1.294759113s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (6.9s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-847951 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-847951 --driver=docker  --container-runtime=containerd: (6.897239454s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.90s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-847951 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-847951 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.875773ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-729726
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-729726: (1.286650802s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

TestPause/serial/Start (45.97s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-822263 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-822263 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (45.973117076s)
--- PASS: TestPause/serial/Start (45.97s)

TestNetworkPlugins/group/auto/Start (44.04s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1009 18:30:35.382793  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/functional-686010/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (44.043450065s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.04s)

TestNetworkPlugins/group/kindnet/Start (42.02s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (42.015607214s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.02s)

TestPause/serial/SecondStartNoReconfiguration (6.18s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-822263 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-822263 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.168620846s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.18s)

TestPause/serial/Pause (0.68s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-822263 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

TestPause/serial/VerifyStatus (0.32s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-822263 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-822263 --output=json --layout=cluster: exit status 2 (320.939569ms)

-- stdout --
	{"Name":"pause-822263","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-822263","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.63s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-822263 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.72s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-822263 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.72s)

TestPause/serial/DeletePaused (2.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-822263 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-822263 --alsologtostderr -v=5: (2.787089744s)
--- PASS: TestPause/serial/DeletePaused (2.79s)

TestPause/serial/VerifyDeletedResources (1.93s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.870743675s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-822263
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-822263: exit status 1 (17.263722ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-822263: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.93s)

TestNetworkPlugins/group/calico/Start (45.29s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (45.289505688s)
--- PASS: TestNetworkPlugins/group/calico/Start (45.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-265552 "pgrep -a kubelet"
I1009 18:31:17.307459  144094 config.go:182] Loaded profile config "auto-265552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-265552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-phnzg" [0476d2af-5b50-42ab-95e6-3d7ca30e1db5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-phnzg" [0476d2af-5b50-42ab-95e6-3d7ca30e1db5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.00535397s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.22s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-265552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8jfts" [40dbd832-5f0e-4c08-890f-dbf0e82c6093] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007976446s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-265552 "pgrep -a kubelet"
I1009 18:31:35.223913  144094 config.go:182] Loaded profile config "kindnet-265552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-265552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8xx25" [2e055d54-1a96-4da8-9d7d-669693ecac95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8xx25" [2e055d54-1a96-4da8-9d7d-669693ecac95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004581741s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-265552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (51.6s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (51.599437642s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.60s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hqmfk" [1730f3d1-39b2-4df3-800d-557f5ab510a8] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005375495s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-265552 "pgrep -a kubelet"
I1009 18:31:59.922890  144094 config.go:182] Loaded profile config "calico-265552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-265552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v8nf7" [1c0e3dc4-6290-413f-8b8b-ecbf13022da7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-v8nf7" [1c0e3dc4-6290-413f-8b8b-ecbf13022da7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003772319s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.24s)

TestNetworkPlugins/group/enable-default-cni/Start (37.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (37.510050477s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (37.51s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-265552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (57.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (57.886084158s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.89s)

TestNetworkPlugins/group/bridge/Start (41.07s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-265552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (41.07108309s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.07s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-265552 "pgrep -a kubelet"
I1009 18:32:38.414107  144094 config.go:182] Loaded profile config "custom-flannel-265552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-265552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wk7k5" [5e544076-ef68-4bd0-8ff4-a50f23207b76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wk7k5" [5e544076-ef68-4bd0-8ff4-a50f23207b76] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003725385s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-265552 "pgrep -a kubelet"
I1009 18:32:43.710901  144094 config.go:182] Loaded profile config "enable-default-cni-265552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-265552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6nwfw" [2d37567d-18b4-4a8e-8c7c-ffee4ce33a1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6nwfw" [2d37567d-18b4-4a8e-8c7c-ffee4ce33a1d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00446245s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-265552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-265552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (52.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-660293 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-660293 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (52.240474835s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.24s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-265552 "pgrep -a kubelet"
I1009 18:33:13.136005  144094 config.go:182] Loaded profile config "bridge-265552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

TestNetworkPlugins/group/bridge/NetCatPod (9.96s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-265552 replace --force -f testdata/netcat-deployment.yaml
I1009 18:33:13.750488  144094 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1009 18:33:14.056943  144094 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t4fkf" [396e62a6-909e-4697-b46d-e0851ffddc62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t4fkf" [396e62a6-909e-4697-b46d-e0851ffddc62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003662723s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.96s)

TestStartStop/group/no-preload/serial/FirstStart (56.2s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-452646 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-452646 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (56.195416504s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-9gkds" [504f73ba-0512-4688-8830-243bc593a537] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004435182s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-265552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-265552 "pgrep -a kubelet"
I1009 18:33:25.295758  144094 config.go:182] Loaded profile config "flannel-265552": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-265552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x47jc" [c64dc4e5-cdc1-467b-8252-9ecc1c47c79d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x47jc" [c64dc4e5-cdc1-467b-8252-9ecc1c47c79d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004318847s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.26s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-265552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-265552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (43.34s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-733169 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-733169 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (43.34485891s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-749311 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-749311 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (41.74088969s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.74s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-660293 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [68068e70-3a8c-4ca7-9a13-37c7f50bbc86] Pending
helpers_test.go:352: "busybox" [68068e70-3a8c-4ca7-9a13-37c7f50bbc86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [68068e70-3a8c-4ca7-9a13-37c7f50bbc86] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00404542s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-660293 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-660293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-660293 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-452646 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c726fb6e-9964-4060-98aa-de8922f2c4aa] Pending
helpers_test.go:352: "busybox" [c726fb6e-9964-4060-98aa-de8922f2c4aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c726fb6e-9964-4060-98aa-de8922f2c4aa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003887745s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-452646 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-660293 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-660293 --alsologtostderr -v=3: (12.115847374s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-452646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-452646 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-452646 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-452646 --alsologtostderr -v=3: (11.96896201s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-660293 -n old-k8s-version-660293
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-660293 -n old-k8s-version-660293: exit status 7 (95.979942ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-660293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (49.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-660293 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-660293 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (49.190546147s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-660293 -n old-k8s-version-660293
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-733169 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9b36c801-6f44-4cf5-9035-e4a1c721e5d2] Pending
helpers_test.go:352: "busybox" [9b36c801-6f44-4cf5-9035-e4a1c721e5d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9b36c801-6f44-4cf5-9035-e4a1c721e5d2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003880453s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-733169 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-452646 -n no-preload-452646
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-452646 -n no-preload-452646: exit status 7 (81.558991ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-452646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (45.33s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-452646 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-452646 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.992157209s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-452646 -n no-preload-452646
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (45.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-733169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-733169 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-749311 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [eb4766c0-594f-407e-b5df-95893b0b18d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [eb4766c0-594f-407e-b5df-95893b0b18d3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004942844s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-749311 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.45s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-733169 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-733169 --alsologtostderr -v=3: (12.447182326s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-749311 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-749311 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-749311 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-749311 --alsologtostderr -v=3: (12.274076099s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-733169 -n embed-certs-733169
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-733169 -n embed-certs-733169: exit status 7 (88.810336ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-733169 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (48.65s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-733169 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-733169 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (48.317752155s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-733169 -n embed-certs-733169
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.65s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-749311 -n default-k8s-diff-port-749311
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-749311 -n default-k8s-diff-port-749311: exit status 7 (94.447497ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-749311 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-749311 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1009 18:35:02.707265  144094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-140450/.minikube/profiles/addons-072257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-749311 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.675722307s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-749311 -n default-k8s-diff-port-749311
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kqr64" [e58224f5-7f8a-4416-89d4-6f9ef8077a80] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004227281s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-x7n4d" [7a62cf2e-f3c9-46c9-b3d8-b71eb93f8750] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003894158s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kqr64" [e58224f5-7f8a-4416-89d4-6f9ef8077a80] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003592367s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-660293 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-x7n4d" [7a62cf2e-f3c9-46c9-b3d8-b71eb93f8750] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004166789s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-452646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-660293 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-660293 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-660293 -n old-k8s-version-660293
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-660293 -n old-k8s-version-660293: exit status 2 (317.465864ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-660293 -n old-k8s-version-660293
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-660293 -n old-k8s-version-660293: exit status 2 (318.744154ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-660293 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-660293 -n old-k8s-version-660293
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-660293 -n old-k8s-version-660293
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-452646 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.94s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-452646 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-452646 -n no-preload-452646
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-452646 -n no-preload-452646: exit status 2 (345.653129ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-452646 -n no-preload-452646
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-452646 -n no-preload-452646: exit status 2 (336.231796ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-452646 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-452646 -n no-preload-452646
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-452646 -n no-preload-452646
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.94s)

TestStartStop/group/newest-cni/serial/FirstStart (25.78s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-976953 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-976953 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (25.783814536s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.78s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s4v7s" [f85de264-0775-41cd-bcb7-e85847380473] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00356717s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s4v7s" [f85de264-0775-41cd-bcb7-e85847380473] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003237199s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-733169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xw84s" [42f5906e-a07c-4744-85d7-a7b3781fa51d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0036155s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-733169 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.78s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-733169 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-733169 -n embed-certs-733169
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-733169 -n embed-certs-733169: exit status 2 (318.378104ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-733169 -n embed-certs-733169
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-733169 -n embed-certs-733169: exit status 2 (309.255928ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-733169 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-733169 -n embed-certs-733169
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-733169 -n embed-certs-733169
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.78s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xw84s" [42f5906e-a07c-4744-85d7-a7b3781fa51d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003182471s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-749311 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-749311 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-976953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-749311 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-749311 -n default-k8s-diff-port-749311
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-749311 -n default-k8s-diff-port-749311: exit status 2 (335.081765ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-749311 -n default-k8s-diff-port-749311
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-749311 -n default-k8s-diff-port-749311: exit status 2 (322.049665ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-749311 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-749311 -n default-k8s-diff-port-749311
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-749311 -n default-k8s-diff-port-749311
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.89s)

TestStartStop/group/newest-cni/serial/Stop (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-976953 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-976953 --alsologtostderr -v=3: (1.245716877s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-976953 -n newest-cni-976953
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-976953 -n newest-cni-976953: exit status 7 (70.938371ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-976953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (11.73s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-976953 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-976953 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (11.425852822s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-976953 -n newest-cni-976953
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.73s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-976953 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-976953 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-976953 -n newest-cni-976953
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-976953 -n newest-cni-976953: exit status 2 (301.822152ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-976953 -n newest-cni-976953
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-976953 -n newest-cni-976953: exit status 2 (296.328937ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-976953 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-976953 -n newest-cni-976953
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-976953 -n newest-cni-976953
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.46s)

Test skip (25/333)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.6s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-265552 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-265552

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-265552

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-265552

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-265552

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-265552

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-265552

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-265552

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-265552

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-265552

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-265552

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: /etc/hosts:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: /etc/resolv.conf:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-265552

>>> host: crictl pods:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: crictl containers:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> k8s: describe netcat deployment:
error: context "kubenet-265552" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-265552" does not exist

>>> k8s: netcat logs:
error: context "kubenet-265552" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-265552" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-265552" does not exist

>>> k8s: coredns logs:
error: context "kubenet-265552" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-265552" does not exist

>>> k8s: api server logs:
error: context "kubenet-265552" does not exist

>>> host: /etc/cni:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: ip a s:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: ip r s:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: iptables-save:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: iptables table nat:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-265552" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-265552" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-265552" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: kubelet daemon config:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> k8s: kubelet logs:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-265552

>>> host: docker daemon status:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: docker daemon config:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: docker system info:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: cri-docker daemon status:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: cri-docker daemon config:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: cri-dockerd version:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: containerd daemon status:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: containerd daemon config:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: containerd config dump:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: crio daemon status:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: crio daemon config:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: /etc/crio:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

>>> host: crio config:
* Profile "kubenet-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-265552"

----------------------- debugLogs end: kubenet-265552 [took: 3.800502209s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-265552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-265552
--- SKIP: TestNetworkPlugins/group/kubenet (4.60s)

x
+
TestNetworkPlugins/group/cilium (3.76s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-265552 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-265552

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-265552" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-265552

>>> host: docker daemon status:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: docker daemon config:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: docker system info:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: cri-docker daemon status:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: cri-docker daemon config:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: cri-dockerd version:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: containerd daemon status:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: containerd daemon config:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: containerd config dump:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: crio daemon status:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: crio daemon config:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: /etc/crio:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

>>> host: crio config:
* Profile "cilium-265552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-265552"

----------------------- debugLogs end: cilium-265552 [took: 3.586399493s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-265552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-265552
--- SKIP: TestNetworkPlugins/group/cilium (3.76s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-466656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-466656
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
